00:00:00.001 Started by upstream project "autotest-per-patch" build number 130561
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.035 The recommended git tool is: git
00:00:00.035 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.056 Fetching changes from the remote Git repository
00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.095 Using shallow fetch with depth 1
00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.095 > git --version # timeout=10
00:00:00.141 > git --version # 'git version 2.39.2'
00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:20.111 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:20.126 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:20.142 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:20.142 > git config core.sparsecheckout # timeout=10
00:00:20.156 > git read-tree -mu HEAD # timeout=10
00:00:20.175 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:20.199 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:20.200 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:20.293 [Pipeline] Start of Pipeline
00:00:20.309 [Pipeline] library
00:00:20.311 Loading library shm_lib@master
00:00:20.311 Library shm_lib@master is cached. Copying from home.
00:00:20.326 [Pipeline] node
00:00:20.334 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest
00:00:20.335 [Pipeline] {
00:00:20.345 [Pipeline] catchError
00:00:20.347 [Pipeline] {
00:00:20.359 [Pipeline] wrap
00:00:20.367 [Pipeline] {
00:00:20.376 [Pipeline] stage
00:00:20.378 [Pipeline] { (Prologue)
00:00:20.396 [Pipeline] echo
00:00:20.397 Node: VM-host-WFP1
00:00:20.403 [Pipeline] cleanWs
00:00:20.411 [WS-CLEANUP] Deleting project workspace...
00:00:20.411 [WS-CLEANUP] Deferred wipeout is used...
00:00:20.417 [WS-CLEANUP] done
00:00:20.660 [Pipeline] setCustomBuildProperty
00:00:20.758 [Pipeline] httpRequest
00:00:21.159 [Pipeline] echo
00:00:21.161 Sorcerer 10.211.164.101 is alive
00:00:21.172 [Pipeline] retry
00:00:21.175 [Pipeline] {
00:00:21.190 [Pipeline] httpRequest
00:00:21.195 HttpMethod: GET
00:00:21.195 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:21.196 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:21.221 Response Code: HTTP/1.1 200 OK
00:00:21.222 Success: Status code 200 is in the accepted range: 200,404
00:00:21.223 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:32.436 [Pipeline] }
00:00:32.453 [Pipeline] // retry
00:00:32.460 [Pipeline] sh
00:00:32.744 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:32.759 [Pipeline] httpRequest
00:00:34.078 [Pipeline] echo
00:00:34.080 Sorcerer 10.211.164.101 is alive
00:00:34.091 [Pipeline] retry
00:00:34.093 [Pipeline] {
00:00:34.109 [Pipeline] httpRequest
00:00:34.114 HttpMethod: GET
00:00:34.115 URL: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:00:34.115 Sending request to url: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:00:34.120 Response Code: HTTP/1.1 200 OK
00:00:34.121 Success: Status code 200 is in the accepted range: 200,404
00:00:34.121 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:02:46.478 [Pipeline] }
00:02:46.497 [Pipeline] // retry
00:02:46.505 [Pipeline] sh
00:02:46.784 + tar --no-same-owner -xf spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:02:49.329 [Pipeline] sh
00:02:49.609 + git -C spdk log --oneline -n5
00:02:49.609 3a41ae5b3 bdev/nvme: controller failover/multipath doc change
00:02:49.609 7b38c9ede bdev/nvme: changed default config to multipath
00:02:49.609 fefe29c8c bdev/nvme: ctrl config consistency check
00:02:49.609 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:02:49.609 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:02:49.653 [Pipeline] writeFile
00:02:49.668 [Pipeline] sh
00:02:49.951 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:49.962 [Pipeline] sh
00:02:50.239 + cat autorun-spdk.conf
00:02:50.239 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.239 SPDK_RUN_ASAN=1
00:02:50.239 SPDK_RUN_UBSAN=1
00:02:50.239 SPDK_TEST_RAID=1
00:02:50.239 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:50.244 RUN_NIGHTLY=0
00:02:50.248 [Pipeline] }
00:02:50.261 [Pipeline] // stage
00:02:50.278 [Pipeline] stage
00:02:50.281 [Pipeline] { (Run VM)
00:02:50.293 [Pipeline] sh
00:02:50.565 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:50.565 + echo 'Start stage prepare_nvme.sh'
00:02:50.565 Start stage prepare_nvme.sh
00:02:50.565 + [[ -n 5 ]]
00:02:50.565 + disk_prefix=ex5
00:02:50.565 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:02:50.565 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:02:50.565 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:02:50.565 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.565 ++ SPDK_RUN_ASAN=1
00:02:50.565 ++ SPDK_RUN_UBSAN=1
00:02:50.565 ++ SPDK_TEST_RAID=1
00:02:50.565 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:50.565 ++ RUN_NIGHTLY=0
00:02:50.566 + cd /var/jenkins/workspace/raid-vg-autotest
00:02:50.566 + nvme_files=()
00:02:50.566 + declare -A nvme_files
00:02:50.566 + backend_dir=/var/lib/libvirt/images/backends
00:02:50.566 + nvme_files['nvme.img']=5G
00:02:50.566 + nvme_files['nvme-cmb.img']=5G
00:02:50.566 + nvme_files['nvme-multi0.img']=4G
00:02:50.566 + nvme_files['nvme-multi1.img']=4G
00:02:50.566 + nvme_files['nvme-multi2.img']=4G
00:02:50.566 + nvme_files['nvme-openstack.img']=8G
00:02:50.566 + nvme_files['nvme-zns.img']=5G
00:02:50.566 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:50.566 + (( SPDK_TEST_FTL == 1 ))
00:02:50.566 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:50.566 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:50.566 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:50.566 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:50.566 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:50.566 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:50.566 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:50.566 + for nvme in "${!nvme_files[@]}"
00:02:50.566 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:50.822 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:50.822 + for nvme in "${!nvme_files[@]}"
00:02:50.822 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:50.822 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:50.822 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:50.822 + echo 'End stage prepare_nvme.sh'
00:02:50.822 End stage prepare_nvme.sh
00:02:50.833 [Pipeline] sh
00:02:51.110 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:51.110 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:02:51.110
00:02:51.110 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:02:51.110 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:02:51.110 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:02:51.110 HELP=0
00:02:51.110 DRY_RUN=0
00:02:51.110 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:02:51.110 NVME_DISKS_TYPE=nvme,nvme,
00:02:51.110 NVME_AUTO_CREATE=0
00:02:51.110 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:02:51.110 NVME_CMB=,,
00:02:51.110 NVME_PMR=,,
00:02:51.110 NVME_ZNS=,,
00:02:51.110 NVME_MS=,,
00:02:51.110 NVME_FDP=,,
00:02:51.110 SPDK_VAGRANT_DISTRO=fedora39
00:02:51.110 SPDK_VAGRANT_VMCPU=10
00:02:51.110 SPDK_VAGRANT_VMRAM=12288
00:02:51.110 SPDK_VAGRANT_PROVIDER=libvirt
00:02:51.110 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:51.110 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:51.110 SPDK_OPENSTACK_NETWORK=0
00:02:51.110 VAGRANT_PACKAGE_BOX=0
00:02:51.110 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:51.110 FORCE_DISTRO=true
00:02:51.110 VAGRANT_BOX_VERSION=
00:02:51.110 EXTRA_VAGRANTFILES=
00:02:51.110 NIC_MODEL=e1000
00:02:51.110
00:02:51.110 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:02:51.110 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:02:53.722 Bringing machine 'default' up with 'libvirt' provider...
00:02:54.657 ==> default: Creating image (snapshot of base box volume).
00:02:54.916 ==> default: Creating domain with the following settings...
00:02:54.916 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727789824_7896e83c7342e41b101f
00:02:54.916 ==> default: -- Domain type: kvm
00:02:54.916 ==> default: -- Cpus: 10
00:02:54.916 ==> default: -- Feature: acpi
00:02:54.916 ==> default: -- Feature: apic
00:02:54.916 ==> default: -- Feature: pae
00:02:54.916 ==> default: -- Memory: 12288M
00:02:54.916 ==> default: -- Memory Backing: hugepages:
00:02:54.916 ==> default: -- Management MAC:
00:02:54.916 ==> default: -- Loader:
00:02:54.916 ==> default: -- Nvram:
00:02:54.916 ==> default: -- Base box: spdk/fedora39
00:02:54.916 ==> default: -- Storage pool: default
00:02:54.916 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727789824_7896e83c7342e41b101f.img (20G)
00:02:54.916 ==> default: -- Volume Cache: default
00:02:54.916 ==> default: -- Kernel:
00:02:54.916 ==> default: -- Initrd:
00:02:54.916 ==> default: -- Graphics Type: vnc
00:02:54.916 ==> default: -- Graphics Port: -1
00:02:54.916 ==> default: -- Graphics IP: 127.0.0.1
00:02:54.916 ==> default: -- Graphics Password: Not defined
00:02:54.916 ==> default: -- Video Type: cirrus
00:02:54.916 ==> default: -- Video VRAM: 9216
00:02:54.916 ==> default: -- Sound Type:
00:02:54.916 ==> default: -- Keymap: en-us
00:02:54.916 ==> default: -- TPM Path:
00:02:54.916 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:54.916 ==> default: -- Command line args:
00:02:54.916 ==> default: -> value=-device,
00:02:54.916 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:54.917 ==> default: -> value=-drive,
00:02:54.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:02:54.917 ==> default: -> value=-device,
00:02:54.917 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:54.917 ==> default: -> value=-device,
00:02:54.917 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:54.917 ==> default: -> value=-drive,
00:02:54.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:54.917 ==> default: -> value=-device,
00:02:54.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:54.917 ==> default: -> value=-drive,
00:02:54.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:54.917 ==> default: -> value=-device,
00:02:54.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:54.917 ==> default: -> value=-drive,
00:02:54.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:54.917 ==> default: -> value=-device,
00:02:54.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:55.485 ==> default: Creating shared folders metadata...
00:02:55.485 ==> default: Starting domain.
00:02:56.903 ==> default: Waiting for domain to get an IP address...
00:03:15.006 ==> default: Waiting for SSH to become available...
00:03:15.006 ==> default: Configuring and enabling network interfaces...
00:03:18.364     default: SSH address: 192.168.121.38:22
00:03:18.364     default: SSH username: vagrant
00:03:18.364     default: SSH auth method: private key
00:03:21.655 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:29.793 ==> default: Mounting SSHFS shared folder...
00:03:32.343 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:32.343 ==> default: Checking Mount..
00:03:34.246 ==> default: Folder Successfully Mounted!
00:03:34.246 ==> default: Running provisioner: file...
00:03:35.249     default: ~/.gitconfig => .gitconfig
00:03:35.814
00:03:35.814   SUCCESS!
00:03:35.814
00:03:35.814   cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:35.814   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:35.814   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:35.814
00:03:35.822 [Pipeline] }
00:03:35.833 [Pipeline] // stage
00:03:35.840 [Pipeline] dir
00:03:35.841 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:03:35.842 [Pipeline] {
00:03:35.852 [Pipeline] catchError
00:03:35.853 [Pipeline] {
00:03:35.863 [Pipeline] sh
00:03:36.143 + vagrant ssh-config --host vagrant
00:03:36.143 + sed -ne /^Host/,$p
00:03:36.143 + tee ssh_conf
00:03:39.480 Host vagrant
00:03:39.480   HostName 192.168.121.38
00:03:39.480   User vagrant
00:03:39.480   Port 22
00:03:39.480   UserKnownHostsFile /dev/null
00:03:39.480   StrictHostKeyChecking no
00:03:39.480   PasswordAuthentication no
00:03:39.480   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:39.480   IdentitiesOnly yes
00:03:39.480   LogLevel FATAL
00:03:39.480   ForwardAgent yes
00:03:39.480   ForwardX11 yes
00:03:39.480
00:03:39.494 [Pipeline] withEnv
00:03:39.496 [Pipeline] {
00:03:39.511 [Pipeline] sh
00:03:39.792 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:39.792 source /etc/os-release
00:03:39.792 [[ -e /image.version ]] && img=$(< /image.version)
00:03:39.792 # Minimal, systemd-like check.
00:03:39.792 if [[ -e /.dockerenv ]]; then
00:03:39.792   # Clear garbage from the node's name:
00:03:39.792   # agt-er_autotest_547-896 -> autotest_547-896
00:03:39.792   # $HOSTNAME is the actual container id
00:03:39.792   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:39.792   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:39.792     # We can assume this is a mount from a host where container is running,
00:03:39.792     # so fetch its hostname to easily identify the target swarm worker.
00:03:39.792     container="$(< /etc/hostname) ($agent)"
00:03:39.792   else
00:03:39.792     # Fallback
00:03:39.792     container=$agent
00:03:39.792   fi
00:03:39.792 fi
00:03:39.792 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:39.792
00:03:40.062 [Pipeline] }
00:03:40.077 [Pipeline] // withEnv
00:03:40.086 [Pipeline] setCustomBuildProperty
00:03:40.099 [Pipeline] stage
00:03:40.102 [Pipeline] { (Tests)
00:03:40.118 [Pipeline] sh
00:03:40.396 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:40.667 [Pipeline] sh
00:03:40.977 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:41.249 [Pipeline] timeout
00:03:41.249 Timeout set to expire in 1 hr 30 min
00:03:41.251 [Pipeline] {
00:03:41.265 [Pipeline] sh
00:03:41.546 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:42.112 HEAD is now at 3a41ae5b3 bdev/nvme: controller failover/multipath doc change
00:03:42.123 [Pipeline] sh
00:03:42.419 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:42.690 [Pipeline] sh
00:03:42.971 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:43.244 [Pipeline] sh
00:03:43.519 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:03:43.778 ++ readlink -f spdk_repo
00:03:43.778 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:43.778 + [[ -n /home/vagrant/spdk_repo ]]
00:03:43.778 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:43.778 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:43.778 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:43.778 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:43.778 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:43.778 + [[ raid-vg-autotest == pkgdep-* ]]
00:03:43.778 + cd /home/vagrant/spdk_repo
00:03:43.778 + source /etc/os-release
00:03:43.778 ++ NAME='Fedora Linux'
00:03:43.778 ++ VERSION='39 (Cloud Edition)'
00:03:43.778 ++ ID=fedora
00:03:43.778 ++ VERSION_ID=39
00:03:43.778 ++ VERSION_CODENAME=
00:03:43.778 ++ PLATFORM_ID=platform:f39
00:03:43.778 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:43.778 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:43.778 ++ LOGO=fedora-logo-icon
00:03:43.778 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:43.778 ++ HOME_URL=https://fedoraproject.org/
00:03:43.778 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:43.778 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:43.778 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:43.778 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:43.778 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:43.778 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:43.778 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:43.778 ++ SUPPORT_END=2024-11-12
00:03:43.778 ++ VARIANT='Cloud Edition'
00:03:43.778 ++ VARIANT_ID=cloud
00:03:43.778 + uname -a
00:03:43.778 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:43.778 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:44.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.409 Hugepages
00:03:44.409 node hugesize free / total
00:03:44.409 node0 1048576kB 0 / 0
00:03:44.409 node0 2048kB 0 / 0
00:03:44.409
00:03:44.409 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:44.409 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:44.409 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:44.409 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:03:44.409 + rm -f /tmp/spdk-ld-path
00:03:44.409 + source autorun-spdk.conf
00:03:44.409 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:44.409 ++ SPDK_RUN_ASAN=1
00:03:44.409 ++ SPDK_RUN_UBSAN=1
00:03:44.409 ++ SPDK_TEST_RAID=1
00:03:44.409 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:44.409 ++ RUN_NIGHTLY=0
00:03:44.409 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:44.409 + [[ -n '' ]]
00:03:44.409 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:44.409 + for M in /var/spdk/build-*-manifest.txt
00:03:44.409 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:44.409 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:44.409 + for M in /var/spdk/build-*-manifest.txt
00:03:44.409 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:44.409 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:44.409 + for M in /var/spdk/build-*-manifest.txt
00:03:44.409 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:44.409 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:44.409 ++ uname
00:03:44.409 + [[ Linux == \L\i\n\u\x ]]
00:03:44.409 + sudo dmesg -T
00:03:44.409 + sudo dmesg --clear
00:03:44.669 + dmesg_pid=5201
00:03:44.669 + [[ Fedora Linux == FreeBSD ]]
00:03:44.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:44.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:44.669 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:44.669 + [[ -x /usr/src/fio-static/fio ]]
00:03:44.669 + export FIO_BIN=/usr/src/fio-static/fio
00:03:44.669 + FIO_BIN=/usr/src/fio-static/fio
00:03:44.669 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:44.669 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:44.669 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:44.669 + sudo dmesg -Tw
00:03:44.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:44.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:44.669 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:44.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:44.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:44.669 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:44.669 Test configuration:
00:03:44.669 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:44.669 SPDK_RUN_ASAN=1
00:03:44.669 SPDK_RUN_UBSAN=1
00:03:44.669 SPDK_TEST_RAID=1
00:03:44.669 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:44.669 RUN_NIGHTLY=0
13:37:54 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:03:44.669 13:37:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:44.669 13:37:54 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:44.669 13:37:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:44.669 13:37:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:44.669 13:37:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:44.669 13:37:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:44.669 13:37:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:44.669 13:37:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:44.669 13:37:54 -- paths/export.sh@5 -- $ export PATH
00:03:44.669 13:37:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:44.669 13:37:54 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:44.669 13:37:54 -- common/autobuild_common.sh@479 -- $ date +%s
00:03:44.669 13:37:54 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727789874.XXXXXX
00:03:44.669 13:37:54 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727789874.wP776O
00:03:44.669 13:37:54 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:03:44.669 13:37:54 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:03:44.669 13:37:54 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:44.669 13:37:54 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:44.669 13:37:54 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:44.669 13:37:54 -- common/autobuild_common.sh@495 -- $ get_config_params
00:03:44.669 13:37:54 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:44.669 13:37:54 -- common/autotest_common.sh@10 -- $ set +x
00:03:44.669 13:37:54 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:03:44.669 13:37:54 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:03:44.669 13:37:54 -- pm/common@17 -- $ local monitor
00:03:44.669 13:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:44.669 13:37:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:44.669 13:37:54 -- pm/common@21 -- $ date +%s
00:03:44.669 13:37:54 -- pm/common@25 -- $ sleep 1
00:03:44.669 13:37:54 -- pm/common@21 -- $ date +%s
00:03:44.669 13:37:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789874
00:03:44.669 13:37:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727789874
00:03:44.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789874_collect-cpu-load.pm.log
00:03:44.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727789874_collect-vmstat.pm.log
00:03:45.608 13:37:55 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:03:45.608 13:37:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:45.608 13:37:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:45.608 13:37:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:45.608 13:37:55 -- spdk/autobuild.sh@16 -- $ date -u
00:03:45.608 Tue Oct 1 01:37:55 PM UTC 2024
00:03:45.608 13:37:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:45.608 v25.01-pre-20-g3a41ae5b3
00:03:45.608 13:37:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:45.608 13:37:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:45.608 13:37:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:45.608 13:37:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:45.608 13:37:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:45.608 ************************************
00:03:45.608 START TEST asan
00:03:45.608 ************************************
00:03:45.608 using asan
00:03:45.608 13:37:55 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:03:45.608
00:03:45.608 real	0m0.000s
00:03:45.608 user	0m0.000s
00:03:45.608 sys	0m0.000s
00:03:45.608 13:37:55 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:45.608 13:37:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:45.608 ************************************
00:03:45.608 END TEST asan
00:03:45.608 ************************************
00:03:45.867 13:37:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:45.867 13:37:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:45.867 13:37:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:45.867 13:37:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:45.867 13:37:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:45.867 ************************************
00:03:45.867 START TEST ubsan
00:03:45.867 ************************************
00:03:45.867 using ubsan
00:03:45.867 13:37:55 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:45.867
00:03:45.867 real	0m0.001s
00:03:45.867 user	0m0.001s
00:03:45.867 sys	0m0.000s
00:03:45.867 13:37:55 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:45.867 13:37:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:45.867 ************************************
00:03:45.867 END TEST ubsan
00:03:45.867 ************************************
00:03:45.867 13:37:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:45.867 13:37:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:45.867 13:37:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:45.867 13:37:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:03:45.867 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:45.867 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:46.435 Using 'verbs' RDMA provider
00:04:02.681 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:17.625 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:17.626 Creating mk/config.mk...done.
00:04:17.626 Creating mk/cc.flags.mk...done.
00:04:17.626 Type 'make' to build.
00:04:17.626 13:38:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:17.626 13:38:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:17.626 13:38:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:17.626 13:38:27 -- common/autotest_common.sh@10 -- $ set +x 00:04:17.626 ************************************ 00:04:17.626 START TEST make 00:04:17.626 ************************************ 00:04:17.626 13:38:27 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:17.626 make[1]: Nothing to be done for 'all'. 00:04:29.902 The Meson build system 00:04:29.902 Version: 1.5.0 00:04:29.902 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:29.902 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:29.902 Build type: native build 00:04:29.902 Program cat found: YES (/usr/bin/cat) 00:04:29.902 Project name: DPDK 00:04:29.902 Project version: 24.03.0 00:04:29.902 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:29.902 C linker for the host machine: cc ld.bfd 2.40-14 00:04:29.902 Host machine cpu family: x86_64 00:04:29.902 Host machine cpu: x86_64 00:04:29.902 Message: ## Building in Developer Mode ## 00:04:29.902 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:29.902 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:29.902 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:29.902 Program python3 found: YES (/usr/bin/python3) 00:04:29.902 Program cat found: YES (/usr/bin/cat) 00:04:29.902 Compiler for C supports arguments -march=native: YES 00:04:29.902 Checking for size of "void *" : 8 00:04:29.902 Checking for size of "void *" : 8 (cached) 00:04:29.902 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:29.902 Library m found: YES 00:04:29.902 Library numa found: YES 00:04:29.902 Has header "numaif.h" : YES 
00:04:29.902 Library fdt found: NO 00:04:29.902 Library execinfo found: NO 00:04:29.902 Has header "execinfo.h" : YES 00:04:29.902 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:29.902 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:29.902 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:29.902 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:29.902 Run-time dependency openssl found: YES 3.1.1 00:04:29.902 Run-time dependency libpcap found: YES 1.10.4 00:04:29.902 Has header "pcap.h" with dependency libpcap: YES 00:04:29.902 Compiler for C supports arguments -Wcast-qual: YES 00:04:29.902 Compiler for C supports arguments -Wdeprecated: YES 00:04:29.902 Compiler for C supports arguments -Wformat: YES 00:04:29.902 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:29.902 Compiler for C supports arguments -Wformat-security: NO 00:04:29.902 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:29.902 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:29.902 Compiler for C supports arguments -Wnested-externs: YES 00:04:29.902 Compiler for C supports arguments -Wold-style-definition: YES 00:04:29.902 Compiler for C supports arguments -Wpointer-arith: YES 00:04:29.902 Compiler for C supports arguments -Wsign-compare: YES 00:04:29.902 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:29.902 Compiler for C supports arguments -Wundef: YES 00:04:29.902 Compiler for C supports arguments -Wwrite-strings: YES 00:04:29.902 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:29.902 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:29.902 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:29.902 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:29.902 Program objdump found: YES (/usr/bin/objdump) 00:04:29.902 Compiler for C supports arguments -mavx512f: YES 00:04:29.902 Checking if "AVX512 
checking" compiles: YES 00:04:29.902 Fetching value of define "__SSE4_2__" : 1 00:04:29.902 Fetching value of define "__AES__" : 1 00:04:29.902 Fetching value of define "__AVX__" : 1 00:04:29.902 Fetching value of define "__AVX2__" : 1 00:04:29.902 Fetching value of define "__AVX512BW__" : 1 00:04:29.902 Fetching value of define "__AVX512CD__" : 1 00:04:29.902 Fetching value of define "__AVX512DQ__" : 1 00:04:29.902 Fetching value of define "__AVX512F__" : 1 00:04:29.902 Fetching value of define "__AVX512VL__" : 1 00:04:29.902 Fetching value of define "__PCLMUL__" : 1 00:04:29.902 Fetching value of define "__RDRND__" : 1 00:04:29.902 Fetching value of define "__RDSEED__" : 1 00:04:29.902 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:29.902 Fetching value of define "__znver1__" : (undefined) 00:04:29.902 Fetching value of define "__znver2__" : (undefined) 00:04:29.902 Fetching value of define "__znver3__" : (undefined) 00:04:29.902 Fetching value of define "__znver4__" : (undefined) 00:04:29.902 Library asan found: YES 00:04:29.902 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:29.902 Message: lib/log: Defining dependency "log" 00:04:29.902 Message: lib/kvargs: Defining dependency "kvargs" 00:04:29.902 Message: lib/telemetry: Defining dependency "telemetry" 00:04:29.902 Library rt found: YES 00:04:29.902 Checking for function "getentropy" : NO 00:04:29.902 Message: lib/eal: Defining dependency "eal" 00:04:29.902 Message: lib/ring: Defining dependency "ring" 00:04:29.902 Message: lib/rcu: Defining dependency "rcu" 00:04:29.902 Message: lib/mempool: Defining dependency "mempool" 00:04:29.902 Message: lib/mbuf: Defining dependency "mbuf" 00:04:29.902 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:29.902 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:29.902 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:29.902 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:29.902 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:04:29.902 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:29.902 Compiler for C supports arguments -mpclmul: YES 00:04:29.902 Compiler for C supports arguments -maes: YES 00:04:29.902 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:29.902 Compiler for C supports arguments -mavx512bw: YES 00:04:29.902 Compiler for C supports arguments -mavx512dq: YES 00:04:29.902 Compiler for C supports arguments -mavx512vl: YES 00:04:29.902 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:29.902 Compiler for C supports arguments -mavx2: YES 00:04:29.902 Compiler for C supports arguments -mavx: YES 00:04:29.902 Message: lib/net: Defining dependency "net" 00:04:29.902 Message: lib/meter: Defining dependency "meter" 00:04:29.902 Message: lib/ethdev: Defining dependency "ethdev" 00:04:29.902 Message: lib/pci: Defining dependency "pci" 00:04:29.902 Message: lib/cmdline: Defining dependency "cmdline" 00:04:29.902 Message: lib/hash: Defining dependency "hash" 00:04:29.902 Message: lib/timer: Defining dependency "timer" 00:04:29.902 Message: lib/compressdev: Defining dependency "compressdev" 00:04:29.902 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:29.902 Message: lib/dmadev: Defining dependency "dmadev" 00:04:29.903 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:29.903 Message: lib/power: Defining dependency "power" 00:04:29.903 Message: lib/reorder: Defining dependency "reorder" 00:04:29.903 Message: lib/security: Defining dependency "security" 00:04:29.903 Has header "linux/userfaultfd.h" : YES 00:04:29.903 Has header "linux/vduse.h" : YES 00:04:29.903 Message: lib/vhost: Defining dependency "vhost" 00:04:29.903 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:29.903 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:29.903 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:29.903 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:04:29.903 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:29.903 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:29.903 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:29.903 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:29.903 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:29.903 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:29.903 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:29.903 Configuring doxy-api-html.conf using configuration 00:04:29.903 Configuring doxy-api-man.conf using configuration 00:04:29.903 Program mandb found: YES (/usr/bin/mandb) 00:04:29.903 Program sphinx-build found: NO 00:04:29.903 Configuring rte_build_config.h using configuration 00:04:29.903 Message: 00:04:29.903 ================= 00:04:29.903 Applications Enabled 00:04:29.903 ================= 00:04:29.903 00:04:29.903 apps: 00:04:29.903 00:04:29.903 00:04:29.903 Message: 00:04:29.903 ================= 00:04:29.903 Libraries Enabled 00:04:29.903 ================= 00:04:29.903 00:04:29.903 libs: 00:04:29.903 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:29.903 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:29.903 cryptodev, dmadev, power, reorder, security, vhost, 00:04:29.903 00:04:29.903 Message: 00:04:29.903 =============== 00:04:29.903 Drivers Enabled 00:04:29.903 =============== 00:04:29.903 00:04:29.903 common: 00:04:29.903 00:04:29.903 bus: 00:04:29.903 pci, vdev, 00:04:29.903 mempool: 00:04:29.903 ring, 00:04:29.903 dma: 00:04:29.903 00:04:29.903 net: 00:04:29.903 00:04:29.903 crypto: 00:04:29.903 00:04:29.903 compress: 00:04:29.903 00:04:29.903 vdpa: 00:04:29.903 00:04:29.903 00:04:29.903 Message: 00:04:29.903 ================= 00:04:29.903 Content Skipped 00:04:29.903 ================= 00:04:29.903 00:04:29.903 apps: 
00:04:29.903 dumpcap: explicitly disabled via build config 00:04:29.903 graph: explicitly disabled via build config 00:04:29.903 pdump: explicitly disabled via build config 00:04:29.903 proc-info: explicitly disabled via build config 00:04:29.903 test-acl: explicitly disabled via build config 00:04:29.903 test-bbdev: explicitly disabled via build config 00:04:29.903 test-cmdline: explicitly disabled via build config 00:04:29.903 test-compress-perf: explicitly disabled via build config 00:04:29.903 test-crypto-perf: explicitly disabled via build config 00:04:29.903 test-dma-perf: explicitly disabled via build config 00:04:29.903 test-eventdev: explicitly disabled via build config 00:04:29.903 test-fib: explicitly disabled via build config 00:04:29.903 test-flow-perf: explicitly disabled via build config 00:04:29.903 test-gpudev: explicitly disabled via build config 00:04:29.903 test-mldev: explicitly disabled via build config 00:04:29.903 test-pipeline: explicitly disabled via build config 00:04:29.903 test-pmd: explicitly disabled via build config 00:04:29.903 test-regex: explicitly disabled via build config 00:04:29.903 test-sad: explicitly disabled via build config 00:04:29.903 test-security-perf: explicitly disabled via build config 00:04:29.903 00:04:29.903 libs: 00:04:29.903 argparse: explicitly disabled via build config 00:04:29.903 metrics: explicitly disabled via build config 00:04:29.903 acl: explicitly disabled via build config 00:04:29.903 bbdev: explicitly disabled via build config 00:04:29.903 bitratestats: explicitly disabled via build config 00:04:29.903 bpf: explicitly disabled via build config 00:04:29.903 cfgfile: explicitly disabled via build config 00:04:29.903 distributor: explicitly disabled via build config 00:04:29.903 efd: explicitly disabled via build config 00:04:29.903 eventdev: explicitly disabled via build config 00:04:29.903 dispatcher: explicitly disabled via build config 00:04:29.903 gpudev: explicitly disabled via build config 
00:04:29.903 gro: explicitly disabled via build config 00:04:29.903 gso: explicitly disabled via build config 00:04:29.903 ip_frag: explicitly disabled via build config 00:04:29.903 jobstats: explicitly disabled via build config 00:04:29.903 latencystats: explicitly disabled via build config 00:04:29.903 lpm: explicitly disabled via build config 00:04:29.903 member: explicitly disabled via build config 00:04:29.903 pcapng: explicitly disabled via build config 00:04:29.903 rawdev: explicitly disabled via build config 00:04:29.903 regexdev: explicitly disabled via build config 00:04:29.903 mldev: explicitly disabled via build config 00:04:29.903 rib: explicitly disabled via build config 00:04:29.903 sched: explicitly disabled via build config 00:04:29.903 stack: explicitly disabled via build config 00:04:29.903 ipsec: explicitly disabled via build config 00:04:29.903 pdcp: explicitly disabled via build config 00:04:29.903 fib: explicitly disabled via build config 00:04:29.903 port: explicitly disabled via build config 00:04:29.903 pdump: explicitly disabled via build config 00:04:29.903 table: explicitly disabled via build config 00:04:29.903 pipeline: explicitly disabled via build config 00:04:29.904 graph: explicitly disabled via build config 00:04:29.904 node: explicitly disabled via build config 00:04:29.904 00:04:29.904 drivers: 00:04:29.904 common/cpt: not in enabled drivers build config 00:04:29.904 common/dpaax: not in enabled drivers build config 00:04:29.904 common/iavf: not in enabled drivers build config 00:04:29.904 common/idpf: not in enabled drivers build config 00:04:29.904 common/ionic: not in enabled drivers build config 00:04:29.904 common/mvep: not in enabled drivers build config 00:04:29.904 common/octeontx: not in enabled drivers build config 00:04:29.904 bus/auxiliary: not in enabled drivers build config 00:04:29.904 bus/cdx: not in enabled drivers build config 00:04:29.904 bus/dpaa: not in enabled drivers build config 00:04:29.904 bus/fslmc: 
not in enabled drivers build config 00:04:29.904 bus/ifpga: not in enabled drivers build config 00:04:29.904 bus/platform: not in enabled drivers build config 00:04:29.904 bus/uacce: not in enabled drivers build config 00:04:29.904 bus/vmbus: not in enabled drivers build config 00:04:29.904 common/cnxk: not in enabled drivers build config 00:04:29.904 common/mlx5: not in enabled drivers build config 00:04:29.904 common/nfp: not in enabled drivers build config 00:04:29.904 common/nitrox: not in enabled drivers build config 00:04:29.904 common/qat: not in enabled drivers build config 00:04:29.904 common/sfc_efx: not in enabled drivers build config 00:04:29.904 mempool/bucket: not in enabled drivers build config 00:04:29.904 mempool/cnxk: not in enabled drivers build config 00:04:29.904 mempool/dpaa: not in enabled drivers build config 00:04:29.904 mempool/dpaa2: not in enabled drivers build config 00:04:29.904 mempool/octeontx: not in enabled drivers build config 00:04:29.904 mempool/stack: not in enabled drivers build config 00:04:29.904 dma/cnxk: not in enabled drivers build config 00:04:29.904 dma/dpaa: not in enabled drivers build config 00:04:29.904 dma/dpaa2: not in enabled drivers build config 00:04:29.904 dma/hisilicon: not in enabled drivers build config 00:04:29.904 dma/idxd: not in enabled drivers build config 00:04:29.904 dma/ioat: not in enabled drivers build config 00:04:29.904 dma/skeleton: not in enabled drivers build config 00:04:29.904 net/af_packet: not in enabled drivers build config 00:04:29.904 net/af_xdp: not in enabled drivers build config 00:04:29.904 net/ark: not in enabled drivers build config 00:04:29.904 net/atlantic: not in enabled drivers build config 00:04:29.904 net/avp: not in enabled drivers build config 00:04:29.904 net/axgbe: not in enabled drivers build config 00:04:29.904 net/bnx2x: not in enabled drivers build config 00:04:29.904 net/bnxt: not in enabled drivers build config 00:04:29.904 net/bonding: not in enabled drivers 
build config 00:04:29.904 net/cnxk: not in enabled drivers build config 00:04:29.904 net/cpfl: not in enabled drivers build config 00:04:29.904 net/cxgbe: not in enabled drivers build config 00:04:29.904 net/dpaa: not in enabled drivers build config 00:04:29.904 net/dpaa2: not in enabled drivers build config 00:04:29.904 net/e1000: not in enabled drivers build config 00:04:29.904 net/ena: not in enabled drivers build config 00:04:29.904 net/enetc: not in enabled drivers build config 00:04:29.904 net/enetfec: not in enabled drivers build config 00:04:29.904 net/enic: not in enabled drivers build config 00:04:29.904 net/failsafe: not in enabled drivers build config 00:04:29.904 net/fm10k: not in enabled drivers build config 00:04:29.904 net/gve: not in enabled drivers build config 00:04:29.904 net/hinic: not in enabled drivers build config 00:04:29.904 net/hns3: not in enabled drivers build config 00:04:29.904 net/i40e: not in enabled drivers build config 00:04:29.904 net/iavf: not in enabled drivers build config 00:04:29.904 net/ice: not in enabled drivers build config 00:04:29.904 net/idpf: not in enabled drivers build config 00:04:29.904 net/igc: not in enabled drivers build config 00:04:29.904 net/ionic: not in enabled drivers build config 00:04:29.904 net/ipn3ke: not in enabled drivers build config 00:04:29.904 net/ixgbe: not in enabled drivers build config 00:04:29.904 net/mana: not in enabled drivers build config 00:04:29.904 net/memif: not in enabled drivers build config 00:04:29.904 net/mlx4: not in enabled drivers build config 00:04:29.904 net/mlx5: not in enabled drivers build config 00:04:29.904 net/mvneta: not in enabled drivers build config 00:04:29.904 net/mvpp2: not in enabled drivers build config 00:04:29.904 net/netvsc: not in enabled drivers build config 00:04:29.904 net/nfb: not in enabled drivers build config 00:04:29.904 net/nfp: not in enabled drivers build config 00:04:29.904 net/ngbe: not in enabled drivers build config 00:04:29.904 net/null: 
not in enabled drivers build config 00:04:29.904 net/octeontx: not in enabled drivers build config 00:04:29.904 net/octeon_ep: not in enabled drivers build config 00:04:29.904 net/pcap: not in enabled drivers build config 00:04:29.904 net/pfe: not in enabled drivers build config 00:04:29.904 net/qede: not in enabled drivers build config 00:04:29.904 net/ring: not in enabled drivers build config 00:04:29.904 net/sfc: not in enabled drivers build config 00:04:29.904 net/softnic: not in enabled drivers build config 00:04:29.904 net/tap: not in enabled drivers build config 00:04:29.904 net/thunderx: not in enabled drivers build config 00:04:29.904 net/txgbe: not in enabled drivers build config 00:04:29.904 net/vdev_netvsc: not in enabled drivers build config 00:04:29.904 net/vhost: not in enabled drivers build config 00:04:29.904 net/virtio: not in enabled drivers build config 00:04:29.904 net/vmxnet3: not in enabled drivers build config 00:04:29.904 raw/*: missing internal dependency, "rawdev" 00:04:29.904 crypto/armv8: not in enabled drivers build config 00:04:29.904 crypto/bcmfs: not in enabled drivers build config 00:04:29.904 crypto/caam_jr: not in enabled drivers build config 00:04:29.904 crypto/ccp: not in enabled drivers build config 00:04:29.904 crypto/cnxk: not in enabled drivers build config 00:04:29.904 crypto/dpaa_sec: not in enabled drivers build config 00:04:29.904 crypto/dpaa2_sec: not in enabled drivers build config 00:04:29.904 crypto/ipsec_mb: not in enabled drivers build config 00:04:29.904 crypto/mlx5: not in enabled drivers build config 00:04:29.904 crypto/mvsam: not in enabled drivers build config 00:04:29.905 crypto/nitrox: not in enabled drivers build config 00:04:29.905 crypto/null: not in enabled drivers build config 00:04:29.905 crypto/octeontx: not in enabled drivers build config 00:04:29.905 crypto/openssl: not in enabled drivers build config 00:04:29.905 crypto/scheduler: not in enabled drivers build config 00:04:29.905 crypto/uadk: not 
in enabled drivers build config 00:04:29.905 crypto/virtio: not in enabled drivers build config 00:04:29.905 compress/isal: not in enabled drivers build config 00:04:29.905 compress/mlx5: not in enabled drivers build config 00:04:29.905 compress/nitrox: not in enabled drivers build config 00:04:29.905 compress/octeontx: not in enabled drivers build config 00:04:29.905 compress/zlib: not in enabled drivers build config 00:04:29.905 regex/*: missing internal dependency, "regexdev" 00:04:29.905 ml/*: missing internal dependency, "mldev" 00:04:29.905 vdpa/ifc: not in enabled drivers build config 00:04:29.905 vdpa/mlx5: not in enabled drivers build config 00:04:29.905 vdpa/nfp: not in enabled drivers build config 00:04:29.905 vdpa/sfc: not in enabled drivers build config 00:04:29.905 event/*: missing internal dependency, "eventdev" 00:04:29.905 baseband/*: missing internal dependency, "bbdev" 00:04:29.905 gpu/*: missing internal dependency, "gpudev" 00:04:29.905 00:04:29.905 00:04:29.905 Build targets in project: 85 00:04:29.905 00:04:29.905 DPDK 24.03.0 00:04:29.905 00:04:29.905 User defined options 00:04:29.905 buildtype : debug 00:04:29.905 default_library : shared 00:04:29.905 libdir : lib 00:04:29.905 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:29.905 b_sanitize : address 00:04:29.905 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:29.905 c_link_args : 00:04:29.905 cpu_instruction_set: native 00:04:29.905 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:29.905 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:29.905 enable_docs : false 00:04:29.905 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:29.905 enable_kmods : false 00:04:29.905 max_lcores : 128 00:04:29.905 tests : false 00:04:29.905 00:04:29.905 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:29.905 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:29.905 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:29.905 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:29.905 [3/268] Linking static target lib/librte_kvargs.a 00:04:29.905 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:29.905 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:29.905 [6/268] Linking static target lib/librte_log.a 00:04:29.905 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:29.905 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:29.905 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:29.905 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.905 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:29.905 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:29.905 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:29.905 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:29.905 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:29.905 [16/268] Linking static target lib/librte_telemetry.a 00:04:29.905 [17/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:29.905 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:30.163 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:30.163 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.420 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:30.420 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:30.420 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:30.420 [24/268] Linking target lib/librte_log.so.24.1 00:04:30.420 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:30.420 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:30.420 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:30.420 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:30.420 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:30.420 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:30.677 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:30.677 [32/268] Linking target lib/librte_kvargs.so.24.1 00:04:30.677 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.677 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:30.935 [35/268] Linking target lib/librte_telemetry.so.24.1 00:04:30.935 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:30.935 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:30.935 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:30.935 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:30.935 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:31.192 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:31.192 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:31.192 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:31.192 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:31.192 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:31.192 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:31.192 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:31.450 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:31.450 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:31.450 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:31.708 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:31.708 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:31.708 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:31.708 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:31.966 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:31.966 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:31.966 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:31.966 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:31.966 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:31.966 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 
00:04:32.224 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:32.224 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:32.224 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:32.224 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:32.224 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:32.481 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:32.481 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:32.481 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:32.481 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:32.739 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:32.739 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:32.739 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:32.739 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:32.739 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:32.739 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:32.739 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:32.997 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:32.997 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:32.997 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:32.997 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:32.997 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:33.255 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:33.255 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:33.255 [84/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:33.255 [85/268] Linking static target lib/librte_eal.a 00:04:33.255 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:33.512 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:33.512 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:33.512 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:33.512 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:33.512 [91/268] Linking static target lib/librte_ring.a 00:04:33.512 [92/268] Linking static target lib/librte_rcu.a 00:04:33.772 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:33.772 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:33.772 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:33.772 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:33.772 [97/268] Linking static target lib/librte_mempool.a 00:04:34.031 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:34.031 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:34.031 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.031 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.290 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:34.290 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:34.290 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:34.290 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:34.290 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:34.290 [107/268] Linking static target lib/librte_net.a 00:04:34.550 [108/268] Compiling C 
object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:34.550 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:34.550 [110/268] Linking static target lib/librte_meter.a 00:04:34.550 [111/268] Linking static target lib/librte_mbuf.a 00:04:34.809 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:34.809 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:34.809 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:34.809 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.068 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:35.069 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.069 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.069 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:35.328 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:35.586 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:35.586 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:35.586 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:35.845 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:35.845 [125/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.845 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:35.845 [127/268] Linking static target lib/librte_pci.a 00:04:35.845 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:36.104 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:36.104 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 
00:04:36.104 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:36.104 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.104 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:36.104 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:36.104 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:36.362 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:36.362 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:36.362 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:36.362 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:36.362 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:36.362 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:36.362 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:36.362 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:36.362 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:36.362 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:36.620 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:36.620 [147/268] Linking static target lib/librte_cmdline.a 00:04:36.620 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:36.620 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:36.880 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:36.880 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:36.880 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:37.140 [153/268] 
Linking static target lib/librte_timer.a 00:04:37.140 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:37.140 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:37.399 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:37.399 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:37.399 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:37.399 [159/268] Linking static target lib/librte_compressdev.a 00:04:37.659 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:37.659 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.659 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:37.918 [163/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:37.918 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:37.918 [165/268] Linking static target lib/librte_dmadev.a 00:04:38.180 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:38.180 [167/268] Linking static target lib/librte_hash.a 00:04:38.180 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:38.180 [169/268] Linking static target lib/librte_ethdev.a 00:04:38.180 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:38.180 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:38.180 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:38.180 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.439 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:38.439 [175/268] Generating lib/compressdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:04:38.698 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:38.698 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:38.698 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:38.959 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:38.959 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:38.959 [181/268] Linking static target lib/librte_cryptodev.a 00:04:38.959 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:38.959 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.218 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:39.218 [185/268] Linking static target lib/librte_power.a 00:04:39.478 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:39.478 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:39.478 [188/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.478 [189/268] Linking static target lib/librte_reorder.a 00:04:39.478 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:39.478 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:39.478 [192/268] Linking static target lib/librte_security.a 00:04:39.478 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:40.046 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.304 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.304 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:40.563 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:40.563 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:40.563 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:40.563 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:40.844 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:40.844 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:41.119 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:41.120 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:41.120 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:41.120 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:41.379 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:41.379 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:41.379 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:41.379 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:41.638 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.638 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:41.638 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:41.638 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.638 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.638 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:41.898 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.898 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:41.898 [219/268] 
Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.898 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:41.898 [221/268] Linking static target drivers/librte_bus_pci.a 00:04:41.898 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:41.898 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:41.898 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:41.898 [225/268] Linking static target drivers/librte_mempool_ring.a 00:04:42.157 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.415 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.983 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:46.271 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.530 [230/268] Linking target lib/librte_eal.so.24.1 00:04:46.530 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:46.791 [232/268] Linking target lib/librte_timer.so.24.1 00:04:46.791 [233/268] Linking target lib/librte_ring.so.24.1 00:04:46.791 [234/268] Linking target lib/librte_meter.so.24.1 00:04:46.791 [235/268] Linking target lib/librte_pci.so.24.1 00:04:46.791 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:46.791 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:46.791 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:46.791 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:46.791 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:46.791 [241/268] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:46.791 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:46.791 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:46.791 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:47.050 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:47.050 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:47.050 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:47.050 [248/268] Linking target lib/librte_mbuf.so.24.1 00:04:47.050 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:47.309 [250/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:47.309 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:47.309 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:47.309 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:47.309 [254/268] Linking static target lib/librte_vhost.a 00:04:47.310 [255/268] Linking target lib/librte_net.so.24.1 00:04:47.310 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:04:47.569 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:47.569 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:47.569 [259/268] Linking target lib/librte_hash.so.24.1 00:04:47.569 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:47.569 [261/268] Linking target lib/librte_security.so.24.1 00:04:47.827 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:48.085 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.344 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:48.344 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:48.608 
[266/268] Linking target lib/librte_power.so.24.1 00:04:49.985 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.985 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:49.985 INFO: autodetecting backend as ninja 00:04:49.985 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:08.229 CC lib/log/log_flags.o 00:05:08.229 CC lib/log/log.o 00:05:08.229 CC lib/log/log_deprecated.o 00:05:08.229 CC lib/ut/ut.o 00:05:08.229 CC lib/ut_mock/mock.o 00:05:08.229 LIB libspdk_log.a 00:05:08.229 LIB libspdk_ut.a 00:05:08.229 LIB libspdk_ut_mock.a 00:05:08.229 SO libspdk_ut.so.2.0 00:05:08.229 SO libspdk_log.so.7.0 00:05:08.229 SO libspdk_ut_mock.so.6.0 00:05:08.229 SYMLINK libspdk_ut.so 00:05:08.229 SYMLINK libspdk_log.so 00:05:08.229 SYMLINK libspdk_ut_mock.so 00:05:08.229 CC lib/ioat/ioat.o 00:05:08.229 CXX lib/trace_parser/trace.o 00:05:08.229 CC lib/util/base64.o 00:05:08.229 CC lib/util/cpuset.o 00:05:08.229 CC lib/util/crc32.o 00:05:08.229 CC lib/util/crc16.o 00:05:08.229 CC lib/util/crc32c.o 00:05:08.229 CC lib/util/bit_array.o 00:05:08.229 CC lib/dma/dma.o 00:05:08.229 CC lib/vfio_user/host/vfio_user_pci.o 00:05:08.229 CC lib/vfio_user/host/vfio_user.o 00:05:08.229 CC lib/util/crc32_ieee.o 00:05:08.229 CC lib/util/crc64.o 00:05:08.229 CC lib/util/dif.o 00:05:08.229 CC lib/util/fd.o 00:05:08.229 LIB libspdk_dma.a 00:05:08.229 CC lib/util/fd_group.o 00:05:08.229 SO libspdk_dma.so.5.0 00:05:08.229 LIB libspdk_ioat.a 00:05:08.229 CC lib/util/file.o 00:05:08.229 SO libspdk_ioat.so.7.0 00:05:08.229 SYMLINK libspdk_dma.so 00:05:08.229 CC lib/util/hexlify.o 00:05:08.229 CC lib/util/iov.o 00:05:08.229 SYMLINK libspdk_ioat.so 00:05:08.229 CC lib/util/math.o 00:05:08.229 CC lib/util/net.o 00:05:08.229 CC lib/util/pipe.o 00:05:08.229 LIB libspdk_vfio_user.a 00:05:08.229 SO libspdk_vfio_user.so.5.0 00:05:08.229 CC lib/util/strerror_tls.o 00:05:08.229 CC 
lib/util/string.o 00:05:08.229 CC lib/util/uuid.o 00:05:08.229 SYMLINK libspdk_vfio_user.so 00:05:08.229 CC lib/util/xor.o 00:05:08.229 CC lib/util/zipf.o 00:05:08.229 CC lib/util/md5.o 00:05:08.229 LIB libspdk_util.a 00:05:08.229 SO libspdk_util.so.10.0 00:05:08.229 LIB libspdk_trace_parser.a 00:05:08.229 SO libspdk_trace_parser.so.6.0 00:05:08.229 SYMLINK libspdk_util.so 00:05:08.229 SYMLINK libspdk_trace_parser.so 00:05:08.229 CC lib/json/json_parse.o 00:05:08.229 CC lib/json/json_util.o 00:05:08.229 CC lib/json/json_write.o 00:05:08.229 CC lib/conf/conf.o 00:05:08.229 CC lib/rdma_utils/rdma_utils.o 00:05:08.229 CC lib/env_dpdk/env.o 00:05:08.229 CC lib/env_dpdk/memory.o 00:05:08.229 CC lib/idxd/idxd.o 00:05:08.229 CC lib/rdma_provider/common.o 00:05:08.229 CC lib/vmd/vmd.o 00:05:08.229 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:08.229 CC lib/vmd/led.o 00:05:08.229 CC lib/env_dpdk/pci.o 00:05:08.229 LIB libspdk_conf.a 00:05:08.229 SO libspdk_conf.so.6.0 00:05:08.229 LIB libspdk_rdma_utils.a 00:05:08.229 LIB libspdk_json.a 00:05:08.229 SO libspdk_rdma_utils.so.1.0 00:05:08.229 SO libspdk_json.so.6.0 00:05:08.229 SYMLINK libspdk_conf.so 00:05:08.229 CC lib/env_dpdk/init.o 00:05:08.229 SYMLINK libspdk_rdma_utils.so 00:05:08.229 CC lib/env_dpdk/threads.o 00:05:08.229 LIB libspdk_rdma_provider.a 00:05:08.229 CC lib/env_dpdk/pci_ioat.o 00:05:08.229 SO libspdk_rdma_provider.so.6.0 00:05:08.229 SYMLINK libspdk_json.so 00:05:08.229 CC lib/idxd/idxd_user.o 00:05:08.229 SYMLINK libspdk_rdma_provider.so 00:05:08.229 CC lib/env_dpdk/pci_virtio.o 00:05:08.229 CC lib/env_dpdk/pci_vmd.o 00:05:08.229 CC lib/env_dpdk/pci_idxd.o 00:05:08.229 CC lib/jsonrpc/jsonrpc_server.o 00:05:08.229 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:08.229 CC lib/jsonrpc/jsonrpc_client.o 00:05:08.229 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:08.229 CC lib/idxd/idxd_kernel.o 00:05:08.229 CC lib/env_dpdk/pci_event.o 00:05:08.229 LIB libspdk_vmd.a 00:05:08.229 CC lib/env_dpdk/sigbus_handler.o 
00:05:08.229 CC lib/env_dpdk/pci_dpdk.o 00:05:08.229 SO libspdk_vmd.so.6.0 00:05:08.229 SYMLINK libspdk_vmd.so 00:05:08.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:08.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:08.229 LIB libspdk_idxd.a 00:05:08.229 SO libspdk_idxd.so.12.1 00:05:08.229 LIB libspdk_jsonrpc.a 00:05:08.229 SYMLINK libspdk_idxd.so 00:05:08.229 SO libspdk_jsonrpc.so.6.0 00:05:08.488 SYMLINK libspdk_jsonrpc.so 00:05:08.747 CC lib/rpc/rpc.o 00:05:09.016 LIB libspdk_env_dpdk.a 00:05:09.016 LIB libspdk_rpc.a 00:05:09.016 SO libspdk_env_dpdk.so.15.0 00:05:09.016 SO libspdk_rpc.so.6.0 00:05:09.016 SYMLINK libspdk_rpc.so 00:05:09.282 SYMLINK libspdk_env_dpdk.so 00:05:09.541 CC lib/trace/trace.o 00:05:09.541 CC lib/trace/trace_flags.o 00:05:09.541 CC lib/trace/trace_rpc.o 00:05:09.541 CC lib/keyring/keyring.o 00:05:09.541 CC lib/notify/notify.o 00:05:09.541 CC lib/keyring/keyring_rpc.o 00:05:09.541 CC lib/notify/notify_rpc.o 00:05:09.541 LIB libspdk_notify.a 00:05:09.799 SO libspdk_notify.so.6.0 00:05:09.799 LIB libspdk_keyring.a 00:05:09.799 LIB libspdk_trace.a 00:05:09.799 SYMLINK libspdk_notify.so 00:05:09.799 SO libspdk_keyring.so.2.0 00:05:09.799 SO libspdk_trace.so.11.0 00:05:09.799 SYMLINK libspdk_trace.so 00:05:09.799 SYMLINK libspdk_keyring.so 00:05:10.365 CC lib/thread/iobuf.o 00:05:10.365 CC lib/thread/thread.o 00:05:10.365 CC lib/sock/sock_rpc.o 00:05:10.365 CC lib/sock/sock.o 00:05:10.624 LIB libspdk_sock.a 00:05:10.624 SO libspdk_sock.so.10.0 00:05:10.883 SYMLINK libspdk_sock.so 00:05:11.150 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:11.150 CC lib/nvme/nvme_ctrlr.o 00:05:11.150 CC lib/nvme/nvme_fabric.o 00:05:11.150 CC lib/nvme/nvme_ns_cmd.o 00:05:11.150 CC lib/nvme/nvme_pcie.o 00:05:11.150 CC lib/nvme/nvme_ns.o 00:05:11.150 CC lib/nvme/nvme_pcie_common.o 00:05:11.150 CC lib/nvme/nvme_qpair.o 00:05:11.150 CC lib/nvme/nvme.o 00:05:11.736 LIB libspdk_thread.a 00:05:11.736 CC lib/nvme/nvme_quirks.o 00:05:11.736 SO libspdk_thread.so.10.1 00:05:11.736 CC 
lib/nvme/nvme_transport.o 00:05:11.995 CC lib/nvme/nvme_discovery.o 00:05:11.995 SYMLINK libspdk_thread.so 00:05:11.995 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:11.995 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:11.995 CC lib/nvme/nvme_tcp.o 00:05:11.995 CC lib/nvme/nvme_opal.o 00:05:12.253 CC lib/nvme/nvme_io_msg.o 00:05:12.253 CC lib/nvme/nvme_poll_group.o 00:05:12.511 CC lib/nvme/nvme_zns.o 00:05:12.511 CC lib/nvme/nvme_stubs.o 00:05:12.511 CC lib/nvme/nvme_auth.o 00:05:12.511 CC lib/accel/accel.o 00:05:12.511 CC lib/nvme/nvme_cuse.o 00:05:12.770 CC lib/nvme/nvme_rdma.o 00:05:12.770 CC lib/accel/accel_rpc.o 00:05:12.770 CC lib/blob/blobstore.o 00:05:13.029 CC lib/blob/request.o 00:05:13.029 CC lib/blob/zeroes.o 00:05:13.029 CC lib/accel/accel_sw.o 00:05:13.029 CC lib/blob/blob_bs_dev.o 00:05:13.287 CC lib/init/json_config.o 00:05:13.287 CC lib/init/subsystem.o 00:05:13.545 CC lib/init/subsystem_rpc.o 00:05:13.545 CC lib/init/rpc.o 00:05:13.545 CC lib/fsdev/fsdev.o 00:05:13.545 CC lib/virtio/virtio.o 00:05:13.545 CC lib/virtio/virtio_vhost_user.o 00:05:13.545 CC lib/virtio/virtio_vfio_user.o 00:05:13.545 CC lib/fsdev/fsdev_io.o 00:05:13.545 CC lib/fsdev/fsdev_rpc.o 00:05:13.545 LIB libspdk_accel.a 00:05:13.545 LIB libspdk_init.a 00:05:13.804 SO libspdk_accel.so.16.0 00:05:13.804 SO libspdk_init.so.6.0 00:05:13.804 CC lib/virtio/virtio_pci.o 00:05:13.804 SYMLINK libspdk_accel.so 00:05:13.804 SYMLINK libspdk_init.so 00:05:14.063 CC lib/event/log_rpc.o 00:05:14.063 CC lib/event/reactor.o 00:05:14.063 CC lib/event/scheduler_static.o 00:05:14.063 CC lib/event/app_rpc.o 00:05:14.063 CC lib/event/app.o 00:05:14.063 CC lib/bdev/bdev.o 00:05:14.063 LIB libspdk_nvme.a 00:05:14.063 LIB libspdk_virtio.a 00:05:14.063 SO libspdk_virtio.so.7.0 00:05:14.063 CC lib/bdev/bdev_rpc.o 00:05:14.063 CC lib/bdev/bdev_zone.o 00:05:14.063 LIB libspdk_fsdev.a 00:05:14.323 SYMLINK libspdk_virtio.so 00:05:14.323 CC lib/bdev/part.o 00:05:14.323 SO libspdk_nvme.so.14.0 00:05:14.323 SO 
libspdk_fsdev.so.1.0 00:05:14.323 CC lib/bdev/scsi_nvme.o 00:05:14.323 SYMLINK libspdk_fsdev.so 00:05:14.582 SYMLINK libspdk_nvme.so 00:05:14.582 LIB libspdk_event.a 00:05:14.582 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:14.582 SO libspdk_event.so.14.0 00:05:14.582 SYMLINK libspdk_event.so 00:05:15.151 LIB libspdk_fuse_dispatcher.a 00:05:15.409 SO libspdk_fuse_dispatcher.so.1.0 00:05:15.409 SYMLINK libspdk_fuse_dispatcher.so 00:05:16.344 LIB libspdk_blob.a 00:05:16.604 SO libspdk_blob.so.11.0 00:05:16.604 SYMLINK libspdk_blob.so 00:05:16.863 CC lib/lvol/lvol.o 00:05:17.122 CC lib/blobfs/tree.o 00:05:17.122 CC lib/blobfs/blobfs.o 00:05:17.122 LIB libspdk_bdev.a 00:05:17.122 SO libspdk_bdev.so.16.0 00:05:17.381 SYMLINK libspdk_bdev.so 00:05:17.381 CC lib/scsi/port.o 00:05:17.381 CC lib/scsi/dev.o 00:05:17.381 CC lib/scsi/scsi.o 00:05:17.381 CC lib/scsi/lun.o 00:05:17.381 CC lib/ublk/ublk.o 00:05:17.381 CC lib/ftl/ftl_core.o 00:05:17.640 CC lib/nvmf/ctrlr.o 00:05:17.640 CC lib/nbd/nbd.o 00:05:17.640 CC lib/nbd/nbd_rpc.o 00:05:17.640 CC lib/ftl/ftl_init.o 00:05:17.640 CC lib/scsi/scsi_bdev.o 00:05:17.899 CC lib/scsi/scsi_pr.o 00:05:17.899 CC lib/ublk/ublk_rpc.o 00:05:17.899 CC lib/ftl/ftl_layout.o 00:05:17.899 CC lib/ftl/ftl_debug.o 00:05:17.899 LIB libspdk_blobfs.a 00:05:17.899 LIB libspdk_nbd.a 00:05:17.899 SO libspdk_blobfs.so.10.0 00:05:17.899 LIB libspdk_lvol.a 00:05:17.899 SO libspdk_nbd.so.7.0 00:05:18.160 SO libspdk_lvol.so.10.0 00:05:18.160 CC lib/scsi/scsi_rpc.o 00:05:18.160 SYMLINK libspdk_blobfs.so 00:05:18.160 SYMLINK libspdk_nbd.so 00:05:18.160 CC lib/nvmf/ctrlr_discovery.o 00:05:18.160 CC lib/ftl/ftl_io.o 00:05:18.160 SYMLINK libspdk_lvol.so 00:05:18.160 CC lib/scsi/task.o 00:05:18.160 CC lib/ftl/ftl_sb.o 00:05:18.160 CC lib/nvmf/ctrlr_bdev.o 00:05:18.160 LIB libspdk_ublk.a 00:05:18.160 CC lib/nvmf/subsystem.o 00:05:18.160 SO libspdk_ublk.so.3.0 00:05:18.160 CC lib/nvmf/nvmf.o 00:05:18.160 SYMLINK libspdk_ublk.so 00:05:18.444 CC lib/ftl/ftl_l2p.o 
00:05:18.444 CC lib/ftl/ftl_l2p_flat.o 00:05:18.444 CC lib/ftl/ftl_nv_cache.o 00:05:18.444 CC lib/ftl/ftl_band.o 00:05:18.444 LIB libspdk_scsi.a 00:05:18.444 SO libspdk_scsi.so.9.0 00:05:18.444 CC lib/ftl/ftl_band_ops.o 00:05:18.444 SYMLINK libspdk_scsi.so 00:05:18.444 CC lib/ftl/ftl_writer.o 00:05:18.444 CC lib/ftl/ftl_rq.o 00:05:18.703 CC lib/nvmf/nvmf_rpc.o 00:05:18.703 CC lib/ftl/ftl_reloc.o 00:05:18.703 CC lib/ftl/ftl_l2p_cache.o 00:05:18.703 CC lib/nvmf/transport.o 00:05:18.703 CC lib/nvmf/tcp.o 00:05:18.967 CC lib/iscsi/conn.o 00:05:18.968 CC lib/iscsi/init_grp.o 00:05:19.228 CC lib/iscsi/iscsi.o 00:05:19.228 CC lib/ftl/ftl_p2l.o 00:05:19.487 CC lib/ftl/ftl_p2l_log.o 00:05:19.487 CC lib/nvmf/stubs.o 00:05:19.487 CC lib/nvmf/mdns_server.o 00:05:19.487 CC lib/nvmf/rdma.o 00:05:19.746 CC lib/vhost/vhost.o 00:05:19.746 CC lib/vhost/vhost_rpc.o 00:05:19.746 CC lib/vhost/vhost_scsi.o 00:05:19.746 CC lib/vhost/vhost_blk.o 00:05:19.746 CC lib/ftl/mngt/ftl_mngt.o 00:05:20.004 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:20.004 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:20.004 CC lib/iscsi/param.o 00:05:20.004 CC lib/iscsi/portal_grp.o 00:05:20.263 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:20.263 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:20.263 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:20.263 CC lib/iscsi/tgt_node.o 00:05:20.528 CC lib/vhost/rte_vhost_user.o 00:05:20.528 CC lib/nvmf/auth.o 00:05:20.528 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:20.528 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:20.528 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:20.788 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:20.788 CC lib/iscsi/iscsi_subsystem.o 00:05:20.788 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:20.788 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:20.788 CC lib/iscsi/iscsi_rpc.o 00:05:20.788 CC lib/iscsi/task.o 00:05:20.788 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:20.788 CC lib/ftl/utils/ftl_conf.o 00:05:21.048 CC lib/ftl/utils/ftl_md.o 00:05:21.048 CC lib/ftl/utils/ftl_mempool.o 00:05:21.048 CC lib/ftl/utils/ftl_bitmap.o 
00:05:21.048 CC lib/ftl/utils/ftl_property.o 00:05:21.048 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:21.307 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:21.307 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:21.307 LIB libspdk_iscsi.a 00:05:21.307 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:21.307 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:21.307 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:21.307 SO libspdk_iscsi.so.8.0 00:05:21.565 LIB libspdk_vhost.a 00:05:21.565 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:21.565 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:21.565 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:21.565 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:21.565 SO libspdk_vhost.so.8.0 00:05:21.565 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:21.565 SYMLINK libspdk_iscsi.so 00:05:21.565 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:21.565 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:21.565 CC lib/ftl/base/ftl_base_dev.o 00:05:21.565 SYMLINK libspdk_vhost.so 00:05:21.565 CC lib/ftl/base/ftl_base_bdev.o 00:05:21.565 CC lib/ftl/ftl_trace.o 00:05:21.825 LIB libspdk_ftl.a 00:05:22.084 LIB libspdk_nvmf.a 00:05:22.084 SO libspdk_ftl.so.9.0 00:05:22.342 SO libspdk_nvmf.so.19.0 00:05:22.342 SYMLINK libspdk_nvmf.so 00:05:22.602 SYMLINK libspdk_ftl.so 00:05:22.860 CC module/env_dpdk/env_dpdk_rpc.o 00:05:22.860 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:22.860 CC module/blob/bdev/blob_bdev.o 00:05:22.860 CC module/sock/posix/posix.o 00:05:22.860 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:22.860 CC module/fsdev/aio/fsdev_aio.o 00:05:22.860 CC module/accel/ioat/accel_ioat.o 00:05:22.860 CC module/scheduler/gscheduler/gscheduler.o 00:05:23.119 CC module/accel/error/accel_error.o 00:05:23.119 CC module/keyring/file/keyring.o 00:05:23.119 LIB libspdk_env_dpdk_rpc.a 00:05:23.119 SO libspdk_env_dpdk_rpc.so.6.0 00:05:23.119 SYMLINK libspdk_env_dpdk_rpc.so 00:05:23.119 LIB libspdk_scheduler_dpdk_governor.a 00:05:23.119 CC module/keyring/file/keyring_rpc.o 00:05:23.119 LIB 
libspdk_scheduler_gscheduler.a 00:05:23.119 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:23.119 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:23.119 SO libspdk_scheduler_gscheduler.so.4.0 00:05:23.119 CC module/accel/error/accel_error_rpc.o 00:05:23.119 LIB libspdk_scheduler_dynamic.a 00:05:23.119 CC module/accel/ioat/accel_ioat_rpc.o 00:05:23.119 SO libspdk_scheduler_dynamic.so.4.0 00:05:23.119 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:23.119 SYMLINK libspdk_scheduler_gscheduler.so 00:05:23.378 LIB libspdk_keyring_file.a 00:05:23.378 LIB libspdk_blob_bdev.a 00:05:23.378 SYMLINK libspdk_scheduler_dynamic.so 00:05:23.378 SO libspdk_blob_bdev.so.11.0 00:05:23.378 SO libspdk_keyring_file.so.2.0 00:05:23.378 LIB libspdk_accel_ioat.a 00:05:23.378 LIB libspdk_accel_error.a 00:05:23.378 CC module/fsdev/aio/linux_aio_mgr.o 00:05:23.378 SYMLINK libspdk_blob_bdev.so 00:05:23.378 SO libspdk_accel_error.so.2.0 00:05:23.378 SO libspdk_accel_ioat.so.6.0 00:05:23.378 SYMLINK libspdk_keyring_file.so 00:05:23.378 CC module/accel/iaa/accel_iaa.o 00:05:23.378 CC module/accel/dsa/accel_dsa.o 00:05:23.378 SYMLINK libspdk_accel_ioat.so 00:05:23.378 SYMLINK libspdk_accel_error.so 00:05:23.378 CC module/accel/dsa/accel_dsa_rpc.o 00:05:23.378 CC module/accel/iaa/accel_iaa_rpc.o 00:05:23.378 CC module/keyring/linux/keyring.o 00:05:23.637 CC module/keyring/linux/keyring_rpc.o 00:05:23.637 LIB libspdk_accel_iaa.a 00:05:23.637 CC module/bdev/delay/vbdev_delay.o 00:05:23.637 CC module/blobfs/bdev/blobfs_bdev.o 00:05:23.637 SO libspdk_accel_iaa.so.3.0 00:05:23.637 CC module/bdev/error/vbdev_error.o 00:05:23.637 LIB libspdk_accel_dsa.a 00:05:23.637 LIB libspdk_keyring_linux.a 00:05:23.637 SYMLINK libspdk_accel_iaa.so 00:05:23.637 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:23.637 CC module/bdev/gpt/gpt.o 00:05:23.637 SO libspdk_accel_dsa.so.5.0 00:05:23.637 LIB libspdk_fsdev_aio.a 00:05:23.637 SO libspdk_keyring_linux.so.1.0 00:05:23.637 CC module/bdev/lvol/vbdev_lvol.o 00:05:23.896 SO 
libspdk_fsdev_aio.so.1.0 00:05:23.896 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:23.896 LIB libspdk_sock_posix.a 00:05:23.896 SYMLINK libspdk_accel_dsa.so 00:05:23.896 CC module/bdev/gpt/vbdev_gpt.o 00:05:23.896 SYMLINK libspdk_keyring_linux.so 00:05:23.896 SO libspdk_sock_posix.so.6.0 00:05:23.896 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:23.896 SYMLINK libspdk_fsdev_aio.so 00:05:23.896 CC module/bdev/error/vbdev_error_rpc.o 00:05:23.896 SYMLINK libspdk_sock_posix.so 00:05:23.896 LIB libspdk_blobfs_bdev.a 00:05:23.896 SO libspdk_blobfs_bdev.so.6.0 00:05:23.896 LIB libspdk_bdev_delay.a 00:05:24.155 LIB libspdk_bdev_error.a 00:05:24.155 SO libspdk_bdev_delay.so.6.0 00:05:24.155 SO libspdk_bdev_error.so.6.0 00:05:24.155 CC module/bdev/malloc/bdev_malloc.o 00:05:24.155 SYMLINK libspdk_blobfs_bdev.so 00:05:24.155 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:24.155 CC module/bdev/null/bdev_null.o 00:05:24.155 LIB libspdk_bdev_gpt.a 00:05:24.155 CC module/bdev/nvme/bdev_nvme.o 00:05:24.155 SYMLINK libspdk_bdev_delay.so 00:05:24.155 SYMLINK libspdk_bdev_error.so 00:05:24.155 SO libspdk_bdev_gpt.so.6.0 00:05:24.155 CC module/bdev/passthru/vbdev_passthru.o 00:05:24.155 SYMLINK libspdk_bdev_gpt.so 00:05:24.155 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:24.414 CC module/bdev/raid/bdev_raid.o 00:05:24.414 LIB libspdk_bdev_lvol.a 00:05:24.414 CC module/bdev/split/vbdev_split.o 00:05:24.414 SO libspdk_bdev_lvol.so.6.0 00:05:24.414 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:24.414 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:24.414 CC module/bdev/null/bdev_null_rpc.o 00:05:24.414 SYMLINK libspdk_bdev_lvol.so 00:05:24.414 CC module/bdev/aio/bdev_aio.o 00:05:24.414 LIB libspdk_bdev_passthru.a 00:05:24.414 SO libspdk_bdev_passthru.so.6.0 00:05:24.414 LIB libspdk_bdev_malloc.a 00:05:24.414 SO libspdk_bdev_malloc.so.6.0 00:05:24.673 CC module/bdev/raid/bdev_raid_rpc.o 00:05:24.673 SYMLINK libspdk_bdev_passthru.so 00:05:24.673 CC 
module/bdev/split/vbdev_split_rpc.o 00:05:24.673 LIB libspdk_bdev_null.a 00:05:24.673 CC module/bdev/ftl/bdev_ftl.o 00:05:24.673 SYMLINK libspdk_bdev_malloc.so 00:05:24.673 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:24.673 SO libspdk_bdev_null.so.6.0 00:05:24.673 SYMLINK libspdk_bdev_null.so 00:05:24.673 CC module/bdev/iscsi/bdev_iscsi.o 00:05:24.673 LIB libspdk_bdev_zone_block.a 00:05:24.673 LIB libspdk_bdev_split.a 00:05:24.673 SO libspdk_bdev_zone_block.so.6.0 00:05:24.673 SO libspdk_bdev_split.so.6.0 00:05:24.673 CC module/bdev/aio/bdev_aio_rpc.o 00:05:24.932 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:24.932 SYMLINK libspdk_bdev_zone_block.so 00:05:24.932 SYMLINK libspdk_bdev_split.so 00:05:24.932 CC module/bdev/raid/bdev_raid_sb.o 00:05:24.932 CC module/bdev/raid/raid0.o 00:05:24.932 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:24.932 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:24.932 LIB libspdk_bdev_aio.a 00:05:24.932 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:24.932 SO libspdk_bdev_aio.so.6.0 00:05:25.222 SYMLINK libspdk_bdev_aio.so 00:05:25.222 CC module/bdev/nvme/nvme_rpc.o 00:05:25.222 CC module/bdev/raid/raid1.o 00:05:25.222 LIB libspdk_bdev_ftl.a 00:05:25.222 CC module/bdev/raid/concat.o 00:05:25.222 LIB libspdk_bdev_iscsi.a 00:05:25.222 SO libspdk_bdev_ftl.so.6.0 00:05:25.222 SO libspdk_bdev_iscsi.so.6.0 00:05:25.222 SYMLINK libspdk_bdev_ftl.so 00:05:25.222 SYMLINK libspdk_bdev_iscsi.so 00:05:25.222 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:25.222 CC module/bdev/raid/raid5f.o 00:05:25.222 CC module/bdev/nvme/bdev_mdns_client.o 00:05:25.222 CC module/bdev/nvme/vbdev_opal.o 00:05:25.222 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:25.481 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:25.481 LIB libspdk_bdev_virtio.a 00:05:25.481 SO libspdk_bdev_virtio.so.6.0 00:05:25.481 SYMLINK libspdk_bdev_virtio.so 00:05:25.740 LIB libspdk_bdev_raid.a 00:05:25.999 SO libspdk_bdev_raid.so.6.0 00:05:25.999 SYMLINK libspdk_bdev_raid.so 00:05:26.567 LIB 
libspdk_bdev_nvme.a 00:05:26.826 SO libspdk_bdev_nvme.so.7.0 00:05:26.826 SYMLINK libspdk_bdev_nvme.so 00:05:27.396 CC module/event/subsystems/sock/sock.o 00:05:27.396 CC module/event/subsystems/iobuf/iobuf.o 00:05:27.396 CC module/event/subsystems/keyring/keyring.o 00:05:27.396 CC module/event/subsystems/vmd/vmd.o 00:05:27.396 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:27.396 CC module/event/subsystems/fsdev/fsdev.o 00:05:27.396 CC module/event/subsystems/scheduler/scheduler.o 00:05:27.396 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:27.396 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:27.655 LIB libspdk_event_sock.a 00:05:27.655 LIB libspdk_event_fsdev.a 00:05:27.655 LIB libspdk_event_vmd.a 00:05:27.655 LIB libspdk_event_keyring.a 00:05:27.655 LIB libspdk_event_iobuf.a 00:05:27.655 LIB libspdk_event_scheduler.a 00:05:27.655 SO libspdk_event_sock.so.5.0 00:05:27.655 SO libspdk_event_fsdev.so.1.0 00:05:27.655 SO libspdk_event_vmd.so.6.0 00:05:27.655 SO libspdk_event_keyring.so.1.0 00:05:27.655 SO libspdk_event_iobuf.so.3.0 00:05:27.655 SO libspdk_event_scheduler.so.4.0 00:05:27.655 LIB libspdk_event_vhost_blk.a 00:05:27.655 SO libspdk_event_vhost_blk.so.3.0 00:05:27.655 SYMLINK libspdk_event_fsdev.so 00:05:27.655 SYMLINK libspdk_event_sock.so 00:05:27.655 SYMLINK libspdk_event_keyring.so 00:05:27.655 SYMLINK libspdk_event_vmd.so 00:05:27.655 SYMLINK libspdk_event_iobuf.so 00:05:27.655 SYMLINK libspdk_event_scheduler.so 00:05:27.655 SYMLINK libspdk_event_vhost_blk.so 00:05:28.223 CC module/event/subsystems/accel/accel.o 00:05:28.223 LIB libspdk_event_accel.a 00:05:28.223 SO libspdk_event_accel.so.6.0 00:05:28.223 SYMLINK libspdk_event_accel.so 00:05:28.832 CC module/event/subsystems/bdev/bdev.o 00:05:28.832 LIB libspdk_event_bdev.a 00:05:28.832 SO libspdk_event_bdev.so.6.0 00:05:29.090 SYMLINK libspdk_event_bdev.so 00:05:29.348 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:29.348 CC module/event/subsystems/nbd/nbd.o 00:05:29.348 CC 
module/event/subsystems/scsi/scsi.o 00:05:29.348 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:29.348 CC module/event/subsystems/ublk/ublk.o 00:05:29.607 LIB libspdk_event_nbd.a 00:05:29.607 LIB libspdk_event_scsi.a 00:05:29.607 LIB libspdk_event_ublk.a 00:05:29.607 SO libspdk_event_nbd.so.6.0 00:05:29.607 SO libspdk_event_ublk.so.3.0 00:05:29.607 SO libspdk_event_scsi.so.6.0 00:05:29.607 LIB libspdk_event_nvmf.a 00:05:29.607 SYMLINK libspdk_event_scsi.so 00:05:29.607 SYMLINK libspdk_event_nbd.so 00:05:29.607 SYMLINK libspdk_event_ublk.so 00:05:29.607 SO libspdk_event_nvmf.so.6.0 00:05:29.866 SYMLINK libspdk_event_nvmf.so 00:05:29.866 CC module/event/subsystems/iscsi/iscsi.o 00:05:30.126 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:30.126 LIB libspdk_event_vhost_scsi.a 00:05:30.126 LIB libspdk_event_iscsi.a 00:05:30.126 SO libspdk_event_vhost_scsi.so.3.0 00:05:30.126 SO libspdk_event_iscsi.so.6.0 00:05:30.385 SYMLINK libspdk_event_vhost_scsi.so 00:05:30.385 SYMLINK libspdk_event_iscsi.so 00:05:30.644 SO libspdk.so.6.0 00:05:30.644 SYMLINK libspdk.so 00:05:30.904 CC app/trace_record/trace_record.o 00:05:30.904 CC app/spdk_lspci/spdk_lspci.o 00:05:30.904 CC app/spdk_nvme_perf/perf.o 00:05:30.904 CXX app/trace/trace.o 00:05:30.904 CC app/spdk_nvme_identify/identify.o 00:05:30.904 CC app/nvmf_tgt/nvmf_main.o 00:05:30.904 CC app/spdk_tgt/spdk_tgt.o 00:05:30.904 CC examples/util/zipf/zipf.o 00:05:30.904 CC app/iscsi_tgt/iscsi_tgt.o 00:05:30.904 CC test/thread/poller_perf/poller_perf.o 00:05:30.904 LINK spdk_lspci 00:05:31.163 LINK nvmf_tgt 00:05:31.163 LINK zipf 00:05:31.163 LINK poller_perf 00:05:31.163 LINK spdk_trace_record 00:05:31.163 LINK spdk_tgt 00:05:31.163 LINK iscsi_tgt 00:05:31.163 CC app/spdk_nvme_discover/discovery_aer.o 00:05:31.163 LINK spdk_trace 00:05:31.422 CC app/spdk_top/spdk_top.o 00:05:31.422 CC examples/vmd/lsvmd/lsvmd.o 00:05:31.422 CC examples/ioat/perf/perf.o 00:05:31.422 LINK spdk_nvme_discover 00:05:31.422 CC 
test/dma/test_dma/test_dma.o 00:05:31.422 CC examples/idxd/perf/perf.o 00:05:31.422 CC app/spdk_dd/spdk_dd.o 00:05:31.422 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:31.422 LINK lsvmd 00:05:31.681 LINK ioat_perf 00:05:31.681 LINK spdk_nvme_perf 00:05:31.681 LINK interrupt_tgt 00:05:31.681 LINK spdk_nvme_identify 00:05:31.681 LINK idxd_perf 00:05:31.940 CC examples/vmd/led/led.o 00:05:31.940 LINK spdk_dd 00:05:31.940 CC app/fio/nvme/fio_plugin.o 00:05:31.940 CC examples/ioat/verify/verify.o 00:05:31.940 LINK test_dma 00:05:31.940 LINK led 00:05:32.199 CC app/fio/bdev/fio_plugin.o 00:05:32.199 CC app/vhost/vhost.o 00:05:32.199 CC examples/thread/thread/thread_ex.o 00:05:32.199 LINK verify 00:05:32.199 CC test/app/bdev_svc/bdev_svc.o 00:05:32.199 TEST_HEADER include/spdk/accel.h 00:05:32.199 TEST_HEADER include/spdk/accel_module.h 00:05:32.199 TEST_HEADER include/spdk/assert.h 00:05:32.199 TEST_HEADER include/spdk/barrier.h 00:05:32.199 TEST_HEADER include/spdk/base64.h 00:05:32.199 TEST_HEADER include/spdk/bdev.h 00:05:32.199 TEST_HEADER include/spdk/bdev_module.h 00:05:32.199 TEST_HEADER include/spdk/bdev_zone.h 00:05:32.199 TEST_HEADER include/spdk/bit_array.h 00:05:32.199 TEST_HEADER include/spdk/bit_pool.h 00:05:32.199 TEST_HEADER include/spdk/blob_bdev.h 00:05:32.199 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:32.199 LINK vhost 00:05:32.199 TEST_HEADER include/spdk/blobfs.h 00:05:32.199 TEST_HEADER include/spdk/blob.h 00:05:32.199 TEST_HEADER include/spdk/conf.h 00:05:32.199 TEST_HEADER include/spdk/config.h 00:05:32.199 TEST_HEADER include/spdk/cpuset.h 00:05:32.199 TEST_HEADER include/spdk/crc16.h 00:05:32.199 TEST_HEADER include/spdk/crc32.h 00:05:32.458 TEST_HEADER include/spdk/crc64.h 00:05:32.458 TEST_HEADER include/spdk/dif.h 00:05:32.458 TEST_HEADER include/spdk/dma.h 00:05:32.458 TEST_HEADER include/spdk/endian.h 00:05:32.458 TEST_HEADER include/spdk/env_dpdk.h 00:05:32.458 TEST_HEADER include/spdk/env.h 00:05:32.458 TEST_HEADER 
include/spdk/event.h 00:05:32.458 TEST_HEADER include/spdk/fd_group.h 00:05:32.458 TEST_HEADER include/spdk/fd.h 00:05:32.458 TEST_HEADER include/spdk/file.h 00:05:32.458 TEST_HEADER include/spdk/fsdev.h 00:05:32.458 TEST_HEADER include/spdk/fsdev_module.h 00:05:32.458 TEST_HEADER include/spdk/ftl.h 00:05:32.458 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:32.458 TEST_HEADER include/spdk/gpt_spec.h 00:05:32.458 TEST_HEADER include/spdk/hexlify.h 00:05:32.458 TEST_HEADER include/spdk/histogram_data.h 00:05:32.458 TEST_HEADER include/spdk/idxd.h 00:05:32.458 TEST_HEADER include/spdk/idxd_spec.h 00:05:32.458 TEST_HEADER include/spdk/init.h 00:05:32.458 TEST_HEADER include/spdk/ioat.h 00:05:32.458 TEST_HEADER include/spdk/ioat_spec.h 00:05:32.458 TEST_HEADER include/spdk/iscsi_spec.h 00:05:32.458 TEST_HEADER include/spdk/json.h 00:05:32.458 TEST_HEADER include/spdk/jsonrpc.h 00:05:32.458 TEST_HEADER include/spdk/keyring.h 00:05:32.458 TEST_HEADER include/spdk/keyring_module.h 00:05:32.458 TEST_HEADER include/spdk/likely.h 00:05:32.458 TEST_HEADER include/spdk/log.h 00:05:32.458 TEST_HEADER include/spdk/lvol.h 00:05:32.458 TEST_HEADER include/spdk/md5.h 00:05:32.458 TEST_HEADER include/spdk/memory.h 00:05:32.458 CC examples/sock/hello_world/hello_sock.o 00:05:32.458 TEST_HEADER include/spdk/mmio.h 00:05:32.458 TEST_HEADER include/spdk/nbd.h 00:05:32.458 TEST_HEADER include/spdk/net.h 00:05:32.458 TEST_HEADER include/spdk/notify.h 00:05:32.458 LINK bdev_svc 00:05:32.458 TEST_HEADER include/spdk/nvme.h 00:05:32.458 TEST_HEADER include/spdk/nvme_intel.h 00:05:32.458 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:32.458 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:32.458 TEST_HEADER include/spdk/nvme_spec.h 00:05:32.458 TEST_HEADER include/spdk/nvme_zns.h 00:05:32.458 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:32.458 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:32.458 TEST_HEADER include/spdk/nvmf.h 00:05:32.458 TEST_HEADER include/spdk/nvmf_spec.h 00:05:32.458 
TEST_HEADER include/spdk/nvmf_transport.h 00:05:32.458 TEST_HEADER include/spdk/opal.h 00:05:32.458 LINK spdk_top 00:05:32.458 TEST_HEADER include/spdk/opal_spec.h 00:05:32.458 TEST_HEADER include/spdk/pci_ids.h 00:05:32.458 TEST_HEADER include/spdk/pipe.h 00:05:32.458 TEST_HEADER include/spdk/queue.h 00:05:32.458 TEST_HEADER include/spdk/reduce.h 00:05:32.458 TEST_HEADER include/spdk/rpc.h 00:05:32.458 LINK thread 00:05:32.458 TEST_HEADER include/spdk/scheduler.h 00:05:32.458 TEST_HEADER include/spdk/scsi.h 00:05:32.458 TEST_HEADER include/spdk/scsi_spec.h 00:05:32.458 TEST_HEADER include/spdk/sock.h 00:05:32.458 TEST_HEADER include/spdk/stdinc.h 00:05:32.458 TEST_HEADER include/spdk/string.h 00:05:32.458 TEST_HEADER include/spdk/thread.h 00:05:32.458 TEST_HEADER include/spdk/trace.h 00:05:32.458 TEST_HEADER include/spdk/trace_parser.h 00:05:32.458 TEST_HEADER include/spdk/tree.h 00:05:32.458 TEST_HEADER include/spdk/ublk.h 00:05:32.458 TEST_HEADER include/spdk/util.h 00:05:32.458 TEST_HEADER include/spdk/uuid.h 00:05:32.458 TEST_HEADER include/spdk/version.h 00:05:32.458 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:32.458 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:32.458 TEST_HEADER include/spdk/vhost.h 00:05:32.458 TEST_HEADER include/spdk/vmd.h 00:05:32.458 TEST_HEADER include/spdk/xor.h 00:05:32.458 TEST_HEADER include/spdk/zipf.h 00:05:32.458 CXX test/cpp_headers/accel.o 00:05:32.458 CC test/env/mem_callbacks/mem_callbacks.o 00:05:32.458 CXX test/cpp_headers/accel_module.o 00:05:32.458 LINK spdk_nvme 00:05:32.458 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:32.717 LINK spdk_bdev 00:05:32.717 CXX test/cpp_headers/assert.o 00:05:32.717 CXX test/cpp_headers/barrier.o 00:05:32.717 LINK hello_sock 00:05:32.717 CC test/env/vtophys/vtophys.o 00:05:32.717 CXX test/cpp_headers/base64.o 00:05:32.717 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:32.717 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:32.717 CXX test/cpp_headers/bdev.o 00:05:32.717 CXX 
test/cpp_headers/bdev_module.o 00:05:32.717 CXX test/cpp_headers/bdev_zone.o 00:05:32.975 LINK vtophys 00:05:32.976 CC examples/accel/perf/accel_perf.o 00:05:32.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:32.976 CXX test/cpp_headers/bit_array.o 00:05:32.976 CC test/app/histogram_perf/histogram_perf.o 00:05:32.976 CXX test/cpp_headers/bit_pool.o 00:05:32.976 LINK nvme_fuzz 00:05:32.976 CXX test/cpp_headers/blob_bdev.o 00:05:32.976 LINK mem_callbacks 00:05:33.234 LINK histogram_perf 00:05:33.234 CXX test/cpp_headers/blobfs_bdev.o 00:05:33.234 CXX test/cpp_headers/blobfs.o 00:05:33.234 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:33.234 CC examples/blob/hello_world/hello_blob.o 00:05:33.234 CC test/app/jsoncat/jsoncat.o 00:05:33.234 CC examples/blob/cli/blobcli.o 00:05:33.234 CXX test/cpp_headers/blob.o 00:05:33.505 LINK vhost_fuzz 00:05:33.505 LINK accel_perf 00:05:33.505 LINK env_dpdk_post_init 00:05:33.505 LINK jsoncat 00:05:33.505 CC test/env/memory/memory_ut.o 00:05:33.505 CXX test/cpp_headers/conf.o 00:05:33.505 LINK hello_blob 00:05:33.505 CC test/event/event_perf/event_perf.o 00:05:33.764 CC test/event/reactor/reactor.o 00:05:33.764 CXX test/cpp_headers/config.o 00:05:33.764 CC test/event/app_repeat/app_repeat.o 00:05:33.764 LINK event_perf 00:05:33.764 CC test/event/reactor_perf/reactor_perf.o 00:05:33.764 CXX test/cpp_headers/cpuset.o 00:05:33.764 CXX test/cpp_headers/crc16.o 00:05:33.764 CC test/event/scheduler/scheduler.o 00:05:33.764 LINK reactor 00:05:33.764 LINK blobcli 00:05:33.764 LINK app_repeat 00:05:33.764 LINK reactor_perf 00:05:34.022 CXX test/cpp_headers/crc32.o 00:05:34.022 CC test/env/pci/pci_ut.o 00:05:34.022 CXX test/cpp_headers/crc64.o 00:05:34.022 CC test/nvme/aer/aer.o 00:05:34.022 LINK scheduler 00:05:34.022 CC test/nvme/reset/reset.o 00:05:34.022 CC test/rpc_client/rpc_client_test.o 00:05:34.280 CXX test/cpp_headers/dif.o 00:05:34.280 CC examples/nvme/hello_world/hello_world.o 00:05:34.280 CC test/accel/dif/dif.o 
00:05:34.280 CXX test/cpp_headers/dma.o 00:05:34.280 LINK rpc_client_test 00:05:34.280 LINK aer 00:05:34.538 LINK reset 00:05:34.538 LINK pci_ut 00:05:34.538 CXX test/cpp_headers/endian.o 00:05:34.538 CXX test/cpp_headers/env_dpdk.o 00:05:34.538 LINK hello_world 00:05:34.538 CXX test/cpp_headers/env.o 00:05:34.538 CC test/blobfs/mkfs/mkfs.o 00:05:34.538 LINK memory_ut 00:05:34.796 LINK iscsi_fuzz 00:05:34.796 CC test/nvme/sgl/sgl.o 00:05:34.796 CC test/nvme/e2edp/nvme_dp.o 00:05:34.796 CXX test/cpp_headers/event.o 00:05:34.796 CC examples/nvme/reconnect/reconnect.o 00:05:34.796 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:34.796 LINK mkfs 00:05:34.796 CXX test/cpp_headers/fd_group.o 00:05:34.796 CC test/lvol/esnap/esnap.o 00:05:35.055 CC examples/nvme/arbitration/arbitration.o 00:05:35.055 CC test/app/stub/stub.o 00:05:35.055 CXX test/cpp_headers/fd.o 00:05:35.055 LINK sgl 00:05:35.055 LINK dif 00:05:35.055 LINK nvme_dp 00:05:35.055 CC examples/nvme/hotplug/hotplug.o 00:05:35.055 LINK reconnect 00:05:35.055 CXX test/cpp_headers/file.o 00:05:35.313 LINK stub 00:05:35.313 CC test/nvme/overhead/overhead.o 00:05:35.313 CXX test/cpp_headers/fsdev.o 00:05:35.313 LINK arbitration 00:05:35.313 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:35.313 LINK hotplug 00:05:35.313 CC test/nvme/err_injection/err_injection.o 00:05:35.313 LINK nvme_manage 00:05:35.313 CC test/nvme/startup/startup.o 00:05:35.571 CXX test/cpp_headers/fsdev_module.o 00:05:35.571 LINK cmb_copy 00:05:35.571 LINK err_injection 00:05:35.571 CC examples/nvme/abort/abort.o 00:05:35.571 LINK overhead 00:05:35.571 CXX test/cpp_headers/ftl.o 00:05:35.571 CC test/nvme/reserve/reserve.o 00:05:35.571 LINK startup 00:05:35.829 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:35.829 CC test/bdev/bdevio/bdevio.o 00:05:35.829 CC test/nvme/simple_copy/simple_copy.o 00:05:35.829 CC test/nvme/connect_stress/connect_stress.o 00:05:35.829 CXX test/cpp_headers/fuse_dispatcher.o 00:05:35.829 LINK reserve 00:05:35.829 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:05:35.829 CC test/nvme/boot_partition/boot_partition.o 00:05:36.087 CXX test/cpp_headers/gpt_spec.o 00:05:36.087 LINK connect_stress 00:05:36.087 LINK hello_fsdev 00:05:36.087 LINK simple_copy 00:05:36.087 LINK boot_partition 00:05:36.087 LINK pmr_persistence 00:05:36.087 LINK abort 00:05:36.087 CC test/nvme/compliance/nvme_compliance.o 00:05:36.087 CXX test/cpp_headers/hexlify.o 00:05:36.346 LINK bdevio 00:05:36.346 CXX test/cpp_headers/histogram_data.o 00:05:36.346 CC test/nvme/fused_ordering/fused_ordering.o 00:05:36.346 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:36.346 CC test/nvme/fdp/fdp.o 00:05:36.346 CXX test/cpp_headers/idxd.o 00:05:36.346 CC test/nvme/cuse/cuse.o 00:05:36.605 CXX test/cpp_headers/idxd_spec.o 00:05:36.605 CC examples/bdev/hello_world/hello_bdev.o 00:05:36.605 LINK fused_ordering 00:05:36.605 LINK nvme_compliance 00:05:36.605 CXX test/cpp_headers/init.o 00:05:36.605 LINK doorbell_aers 00:05:36.605 CC examples/bdev/bdevperf/bdevperf.o 00:05:36.605 CXX test/cpp_headers/ioat.o 00:05:36.605 CXX test/cpp_headers/ioat_spec.o 00:05:36.605 CXX test/cpp_headers/iscsi_spec.o 00:05:36.605 LINK fdp 00:05:36.605 CXX test/cpp_headers/json.o 00:05:36.863 LINK hello_bdev 00:05:36.863 CXX test/cpp_headers/jsonrpc.o 00:05:36.863 CXX test/cpp_headers/keyring.o 00:05:36.863 CXX test/cpp_headers/keyring_module.o 00:05:36.863 CXX test/cpp_headers/likely.o 00:05:36.863 CXX test/cpp_headers/log.o 00:05:36.863 CXX test/cpp_headers/lvol.o 00:05:36.863 CXX test/cpp_headers/md5.o 00:05:36.863 CXX test/cpp_headers/memory.o 00:05:37.122 CXX test/cpp_headers/mmio.o 00:05:37.122 CXX test/cpp_headers/nbd.o 00:05:37.122 CXX test/cpp_headers/net.o 00:05:37.122 CXX test/cpp_headers/notify.o 00:05:37.122 CXX test/cpp_headers/nvme.o 00:05:37.122 CXX test/cpp_headers/nvme_intel.o 00:05:37.122 CXX test/cpp_headers/nvme_ocssd.o 00:05:37.122 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:37.122 CXX test/cpp_headers/nvme_spec.o 
00:05:37.122 CXX test/cpp_headers/nvme_zns.o 00:05:37.381 CXX test/cpp_headers/nvmf_cmd.o 00:05:37.381 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:37.381 CXX test/cpp_headers/nvmf.o 00:05:37.381 CXX test/cpp_headers/nvmf_spec.o 00:05:37.381 CXX test/cpp_headers/nvmf_transport.o 00:05:37.381 CXX test/cpp_headers/opal.o 00:05:37.381 CXX test/cpp_headers/opal_spec.o 00:05:37.381 CXX test/cpp_headers/pci_ids.o 00:05:37.381 CXX test/cpp_headers/pipe.o 00:05:37.381 CXX test/cpp_headers/queue.o 00:05:37.640 CXX test/cpp_headers/reduce.o 00:05:37.640 CXX test/cpp_headers/rpc.o 00:05:37.640 LINK bdevperf 00:05:37.640 CXX test/cpp_headers/scheduler.o 00:05:37.640 CXX test/cpp_headers/scsi.o 00:05:37.640 CXX test/cpp_headers/scsi_spec.o 00:05:37.640 CXX test/cpp_headers/sock.o 00:05:37.640 CXX test/cpp_headers/stdinc.o 00:05:37.640 CXX test/cpp_headers/string.o 00:05:37.897 CXX test/cpp_headers/thread.o 00:05:37.897 CXX test/cpp_headers/trace.o 00:05:37.897 CXX test/cpp_headers/trace_parser.o 00:05:37.897 LINK cuse 00:05:37.897 CXX test/cpp_headers/tree.o 00:05:37.897 CXX test/cpp_headers/ublk.o 00:05:37.897 CXX test/cpp_headers/util.o 00:05:37.897 CXX test/cpp_headers/uuid.o 00:05:37.897 CXX test/cpp_headers/version.o 00:05:37.897 CXX test/cpp_headers/vfio_user_pci.o 00:05:37.897 CXX test/cpp_headers/vfio_user_spec.o 00:05:37.897 CXX test/cpp_headers/vhost.o 00:05:38.154 CXX test/cpp_headers/vmd.o 00:05:38.154 CXX test/cpp_headers/xor.o 00:05:38.154 CC examples/nvmf/nvmf/nvmf.o 00:05:38.154 CXX test/cpp_headers/zipf.o 00:05:38.411 LINK nvmf 00:05:41.696 LINK esnap 00:05:41.696 00:05:41.696 real 1m24.465s 00:05:41.696 user 7m26.561s 00:05:41.696 sys 1m52.693s 00:05:41.696 13:39:51 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:41.696 ************************************ 00:05:41.696 13:39:51 make -- common/autotest_common.sh@10 -- $ set +x 00:05:41.696 END TEST make 00:05:41.696 ************************************ 00:05:41.696 13:39:51 -- spdk/autobuild.sh@1 
-- $ stop_monitor_resources 00:05:41.696 13:39:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:41.696 13:39:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:41.696 13:39:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.696 13:39:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:41.696 13:39:51 -- pm/common@44 -- $ pid=5232 00:05:41.696 13:39:51 -- pm/common@50 -- $ kill -TERM 5232 00:05:41.696 13:39:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.696 13:39:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:41.696 13:39:51 -- pm/common@44 -- $ pid=5234 00:05:41.696 13:39:51 -- pm/common@50 -- $ kill -TERM 5234 00:05:41.955 13:39:51 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.955 13:39:51 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.955 13:39:51 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.955 13:39:51 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.955 13:39:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.955 13:39:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.955 13:39:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.955 13:39:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.955 13:39:51 -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.955 13:39:51 -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.955 13:39:51 -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.956 13:39:51 -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.956 13:39:51 -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.956 13:39:51 -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.956 13:39:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.956 13:39:51 -- scripts/common.sh@344 -- # case "$op" in 00:05:41.956 13:39:51 -- scripts/common.sh@345 -- # : 1 00:05:41.956 13:39:51 -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.956 13:39:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.956 13:39:51 -- scripts/common.sh@365 -- # decimal 1 00:05:41.956 13:39:51 -- scripts/common.sh@353 -- # local d=1 00:05:41.956 13:39:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.956 13:39:51 -- scripts/common.sh@355 -- # echo 1 00:05:41.956 13:39:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.956 13:39:51 -- scripts/common.sh@366 -- # decimal 2 00:05:41.956 13:39:51 -- scripts/common.sh@353 -- # local d=2 00:05:41.956 13:39:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.956 13:39:51 -- scripts/common.sh@355 -- # echo 2 00:05:41.956 13:39:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.956 13:39:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.956 13:39:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.956 13:39:51 -- scripts/common.sh@368 -- # return 0 00:05:41.956 13:39:51 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.956 13:39:51 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.956 --rc genhtml_branch_coverage=1 00:05:41.956 --rc genhtml_function_coverage=1 00:05:41.956 --rc genhtml_legend=1 00:05:41.956 --rc geninfo_all_blocks=1 00:05:41.956 --rc geninfo_unexecuted_blocks=1 00:05:41.956 00:05:41.956 ' 00:05:41.956 13:39:51 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.956 --rc genhtml_branch_coverage=1 00:05:41.956 --rc genhtml_function_coverage=1 00:05:41.956 --rc genhtml_legend=1 00:05:41.956 --rc geninfo_all_blocks=1 00:05:41.956 --rc geninfo_unexecuted_blocks=1 00:05:41.956 00:05:41.956 ' 00:05:41.956 13:39:51 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.956 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.956 --rc genhtml_branch_coverage=1 00:05:41.956 --rc genhtml_function_coverage=1 00:05:41.956 --rc genhtml_legend=1 00:05:41.956 --rc geninfo_all_blocks=1 00:05:41.956 --rc geninfo_unexecuted_blocks=1 00:05:41.956 00:05:41.956 ' 00:05:41.956 13:39:51 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.956 --rc genhtml_branch_coverage=1 00:05:41.956 --rc genhtml_function_coverage=1 00:05:41.956 --rc genhtml_legend=1 00:05:41.956 --rc geninfo_all_blocks=1 00:05:41.956 --rc geninfo_unexecuted_blocks=1 00:05:41.956 00:05:41.956 ' 00:05:41.956 13:39:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.956 13:39:51 -- nvmf/common.sh@7 -- # uname -s 00:05:41.956 13:39:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.956 13:39:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.956 13:39:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.956 13:39:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.956 13:39:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.956 13:39:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.956 13:39:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.956 13:39:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.956 13:39:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.956 13:39:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.956 13:39:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70e69b5c-9e77-4517-915c-f036209d1fdb 00:05:41.956 13:39:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=70e69b5c-9e77-4517-915c-f036209d1fdb 00:05:41.956 13:39:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.956 13:39:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.956 13:39:52 -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:05:41.956 13:39:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.956 13:39:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.956 13:39:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.956 13:39:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.956 13:39:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.956 13:39:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.956 13:39:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.956 13:39:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.956 13:39:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.956 13:39:52 -- paths/export.sh@5 -- # export PATH 00:05:41.956 13:39:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.956 13:39:52 -- nvmf/common.sh@51 -- # : 0 00:05:41.956 13:39:52 -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:05:41.956 13:39:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.956 13:39:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.956 13:39:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.956 13:39:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.956 13:39:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.956 13:39:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.956 13:39:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.956 13:39:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.956 13:39:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:41.956 13:39:52 -- spdk/autotest.sh@32 -- # uname -s 00:05:41.956 13:39:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:41.956 13:39:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:41.956 13:39:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:41.956 13:39:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:41.956 13:39:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:41.956 13:39:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:41.956 13:39:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:41.956 13:39:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:41.956 13:39:52 -- spdk/autotest.sh@48 -- # udevadm_pid=54177 00:05:41.956 13:39:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:41.956 13:39:52 -- pm/common@17 -- # local monitor 00:05:41.956 13:39:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.956 13:39:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:41.956 13:39:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 
00:05:41.956 13:39:52 -- pm/common@21 -- # date +%s 00:05:41.956 13:39:52 -- pm/common@25 -- # sleep 1 00:05:41.956 13:39:52 -- pm/common@21 -- # date +%s 00:05:41.956 13:39:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789992 00:05:41.956 13:39:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727789992 00:05:41.956 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789992_collect-cpu-load.pm.log 00:05:41.956 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727789992_collect-vmstat.pm.log 00:05:43.333 13:39:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:43.333 13:39:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:43.333 13:39:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.333 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:05:43.333 13:39:53 -- spdk/autotest.sh@59 -- # create_test_list 00:05:43.333 13:39:53 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:43.333 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:05:43.333 13:39:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:43.333 13:39:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:43.333 13:39:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:43.333 13:39:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:43.333 13:39:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:43.333 13:39:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:43.333 13:39:53 -- common/autotest_common.sh@1455 -- # uname 00:05:43.333 13:39:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 
00:05:43.333 13:39:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:43.333 13:39:53 -- common/autotest_common.sh@1475 -- # uname 00:05:43.333 13:39:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:43.333 13:39:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:43.333 13:39:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:43.333 lcov: LCOV version 1.15 00:05:43.333 13:39:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:01.424 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:01.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:16.306 13:40:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:16.306 13:40:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.306 13:40:25 -- common/autotest_common.sh@10 -- # set +x 00:06:16.306 13:40:25 -- spdk/autotest.sh@78 -- # rm -f 00:06:16.306 13:40:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.306 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:16.306 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:16.306 13:40:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:16.306 13:40:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:16.306 13:40:26 -- 
common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:16.306 13:40:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:16.306 13:40:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:16.306 13:40:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:16.306 13:40:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:16.306 13:40:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:16.306 13:40:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:16.306 13:40:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:16.306 13:40:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:16.306 13:40:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:16.306 13:40:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:16.306 13:40:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:16.306 13:40:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:16.306 13:40:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:16.306 13:40:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:16.306 13:40:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:16.306 13:40:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:16.306 13:40:26 -- spdk/autotest.sh@97 -- # for 
dev in /dev/nvme*n!(*p*) 00:06:16.306 13:40:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:16.306 13:40:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:16.306 13:40:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:16.306 13:40:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:16.306 No valid GPT data, bailing 00:06:16.306 13:40:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:16.306 13:40:26 -- scripts/common.sh@394 -- # pt= 00:06:16.306 13:40:26 -- scripts/common.sh@395 -- # return 1 00:06:16.306 13:40:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:16.306 1+0 records in 00:06:16.306 1+0 records out 00:06:16.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629893 s, 166 MB/s 00:06:16.306 13:40:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:16.306 13:40:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:16.306 13:40:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:16.306 13:40:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:16.306 13:40:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:16.306 No valid GPT data, bailing 00:06:16.306 13:40:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:16.306 13:40:26 -- scripts/common.sh@394 -- # pt= 00:06:16.306 13:40:26 -- scripts/common.sh@395 -- # return 1 00:06:16.306 13:40:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:16.306 1+0 records in 00:06:16.306 1+0 records out 00:06:16.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536375 s, 195 MB/s 00:06:16.306 13:40:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:16.306 13:40:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:16.306 13:40:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:16.306 13:40:26 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n2 pt 00:06:16.306 13:40:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:16.565 No valid GPT data, bailing 00:06:16.565 13:40:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:16.565 13:40:26 -- scripts/common.sh@394 -- # pt= 00:06:16.565 13:40:26 -- scripts/common.sh@395 -- # return 1 00:06:16.565 13:40:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:16.565 1+0 records in 00:06:16.565 1+0 records out 00:06:16.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572892 s, 183 MB/s 00:06:16.565 13:40:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:16.565 13:40:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:16.565 13:40:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:16.565 13:40:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:16.565 13:40:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:16.565 No valid GPT data, bailing 00:06:16.565 13:40:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:16.565 13:40:26 -- scripts/common.sh@394 -- # pt= 00:06:16.565 13:40:26 -- scripts/common.sh@395 -- # return 1 00:06:16.565 13:40:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:16.565 1+0 records in 00:06:16.565 1+0 records out 00:06:16.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055786 s, 188 MB/s 00:06:16.565 13:40:26 -- spdk/autotest.sh@105 -- # sync 00:06:16.565 13:40:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:16.565 13:40:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:16.565 13:40:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:19.855 13:40:29 -- spdk/autotest.sh@111 -- # uname -s 00:06:19.855 13:40:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:19.855 13:40:29 -- 
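The loop traced above checks each `/dev/nvme*n*` namespace for an existing partition table before zeroing its first MiB. A minimal sketch of that decision (device names and the echoed command are illustrative; the real script also consults SPDK's `spdk-gpt.py` helper and actually runs `dd`):

```shell
# Sketch of the per-namespace wipe decision in the autotest trace above.
# A namespace is zeroed only when blkid reports no partition table
# (empty PTTYPE output), matching the "No valid GPT data, bailing" path.
wipe_if_unpartitioned() {
    local dev=$1 pt=$2   # pt: output of `blkid -s PTTYPE -o value $dev`
    if [[ -z $pt ]]; then
        echo "would run: dd if=/dev/zero of=$dev bs=1M count=1"
    else
        echo "skipping $dev (partition table: $pt)"
    fi
}

wipe_if_unpartitioned /dev/nvme0n1 ""      # no PTTYPE -> wipe first MiB
wipe_if_unpartitioned /dev/nvme0n1 "gpt"   # table present -> leave alone
```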
spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:19.855 13:40:29 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:20.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.454 Hugepages 00:06:20.454 node hugesize free / total 00:06:20.454 node0 1048576kB 0 / 0 00:06:20.454 node0 2048kB 0 / 0 00:06:20.454 00:06:20.454 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:20.454 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:20.715 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:20.715 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:20.715 13:40:30 -- spdk/autotest.sh@117 -- # uname -s 00:06:20.715 13:40:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:20.715 13:40:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:20.715 13:40:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:21.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:21.655 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:21.915 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:21.915 13:40:31 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:22.849 13:40:32 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:22.849 13:40:32 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:22.849 13:40:32 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:22.849 13:40:32 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:22.849 13:40:32 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:22.849 13:40:32 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:22.849 13:40:32 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:22.849 13:40:32 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:06:22.849 13:40:32 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:23.107 13:40:33 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:23.107 13:40:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:23.107 13:40:33 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:23.623 Waiting for block devices as requested 00:06:23.623 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.623 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.891 13:40:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:23.891 13:40:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:23.891 13:40:33 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:23.891 13:40:33 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:23.891 13:40:33 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:23.891 13:40:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1541 -- # continue 00:06:23.891 13:40:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:23.891 13:40:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:23.891 13:40:33 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:23.891 13:40:33 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:23.891 13:40:33 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:23.891 13:40:33 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:23.891 13:40:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:23.891 13:40:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:23.891 13:40:33 -- common/autotest_common.sh@1541 -- # continue 00:06:23.891 13:40:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:23.891 13:40:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.891 13:40:33 -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 13:40:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:23.891 13:40:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.891 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 13:40:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:24.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.098 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:25.098 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:25.098 13:40:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:25.098 13:40:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.098 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:25.098 13:40:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:25.098 13:40:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:25.098 13:40:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:25.098 13:40:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:25.098 13:40:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:25.098 13:40:35 -- 
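For each controller the trace above greps the OACS (Optional Admin Command Support) field out of `nvme id-ctrl` and derives `oacs_ns_manage=8` from it. That value is bit 3 of the mask (Namespace Management/Attachment support); reproducing the arithmetic with the value seen in the log:

```shell
# OACS parsing as traced above: mask bit 3 (0x8) of the id-ctrl field.
oacs=' 0x12a'                    # value grepped/cut from `nvme id-ctrl`
oacs_ns_manage=$((oacs & 0x8))   # 0x12a = 1_0010_1010b, so bit 3 is set
echo "$oacs_ns_manage"           # prints 8, i.e. namespace mgmt supported
```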
common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:25.098 13:40:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:25.098 13:40:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:25.098 13:40:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:25.098 13:40:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:25.098 13:40:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:25.098 13:40:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:25.098 13:40:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:25.357 13:40:35 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:25.357 13:40:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:25.357 13:40:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:25.357 13:40:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:25.357 13:40:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:25.357 13:40:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.357 13:40:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:25.357 13:40:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:25.357 13:40:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:25.357 13:40:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.357 13:40:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:25.357 13:40:35 -- common/autotest_common.sh@1570 -- # return 0 00:06:25.357 13:40:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:25.357 13:40:35 -- common/autotest_common.sh@1578 -- # return 0 00:06:25.357 13:40:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:25.357 13:40:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:06:25.357 13:40:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:25.357 13:40:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:25.357 13:40:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:25.357 13:40:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.357 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:25.357 13:40:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:25.357 13:40:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.357 13:40:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.357 13:40:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.357 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:06:25.357 ************************************ 00:06:25.357 START TEST env 00:06:25.357 ************************************ 00:06:25.357 13:40:35 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.357 * Looking for test storage... 
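Earlier in this trace, `opal_revert_cleanup` compares each controller's PCI device ID against `0x0a54` with a bash `[[ == ]]` whose right-hand side has every character backslash-escaped, turning the glob match into a literal string comparison; both QEMU controllers report `0x0010`, so the bdfs list stays empty. A small illustration:

```shell
# Escaped-pattern comparison as used by get_nvme_bdfs_by_id in the log:
# escaping each pattern character makes [[ == ]] a literal string test.
device=0x0010                    # read from /sys/bus/pci/devices/<bdf>/device
if [[ $device == \0\x\0\a\5\4 ]]; then
    echo "match: opal-capable target"
else
    echo "no match: $device"     # both controllers above take this branch
fi
```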
00:06:25.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:25.357 13:40:35 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.357 13:40:35 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.357 13:40:35 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.616 13:40:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.616 13:40:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.616 13:40:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.616 13:40:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.616 13:40:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.616 13:40:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.616 13:40:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.616 13:40:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.616 13:40:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.616 13:40:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.616 13:40:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.616 13:40:35 env -- scripts/common.sh@344 -- # case "$op" in 00:06:25.616 13:40:35 env -- scripts/common.sh@345 -- # : 1 00:06:25.616 13:40:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.616 13:40:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.616 13:40:35 env -- scripts/common.sh@365 -- # decimal 1 00:06:25.616 13:40:35 env -- scripts/common.sh@353 -- # local d=1 00:06:25.616 13:40:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.616 13:40:35 env -- scripts/common.sh@355 -- # echo 1 00:06:25.616 13:40:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.616 13:40:35 env -- scripts/common.sh@366 -- # decimal 2 00:06:25.616 13:40:35 env -- scripts/common.sh@353 -- # local d=2 00:06:25.616 13:40:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.616 13:40:35 env -- scripts/common.sh@355 -- # echo 2 00:06:25.616 13:40:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.616 13:40:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.616 13:40:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.616 13:40:35 env -- scripts/common.sh@368 -- # return 0 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.616 --rc genhtml_branch_coverage=1 00:06:25.616 --rc genhtml_function_coverage=1 00:06:25.616 --rc genhtml_legend=1 00:06:25.616 --rc geninfo_all_blocks=1 00:06:25.616 --rc geninfo_unexecuted_blocks=1 00:06:25.616 00:06:25.616 ' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.616 --rc genhtml_branch_coverage=1 00:06:25.616 --rc genhtml_function_coverage=1 00:06:25.616 --rc genhtml_legend=1 00:06:25.616 --rc geninfo_all_blocks=1 00:06:25.616 --rc geninfo_unexecuted_blocks=1 00:06:25.616 00:06:25.616 ' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:25.616 --rc genhtml_branch_coverage=1 00:06:25.616 --rc genhtml_function_coverage=1 00:06:25.616 --rc genhtml_legend=1 00:06:25.616 --rc geninfo_all_blocks=1 00:06:25.616 --rc geninfo_unexecuted_blocks=1 00:06:25.616 00:06:25.616 ' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.616 --rc genhtml_branch_coverage=1 00:06:25.616 --rc genhtml_function_coverage=1 00:06:25.616 --rc genhtml_legend=1 00:06:25.616 --rc geninfo_all_blocks=1 00:06:25.616 --rc geninfo_unexecuted_blocks=1 00:06:25.616 00:06:25.616 ' 00:06:25.616 13:40:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.616 13:40:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.616 13:40:35 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.616 ************************************ 00:06:25.616 START TEST env_memory 00:06:25.616 ************************************ 00:06:25.616 13:40:35 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:25.616 00:06:25.616 00:06:25.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.616 http://cunit.sourceforge.net/ 00:06:25.616 00:06:25.616 00:06:25.616 Suite: memory 00:06:25.616 Test: alloc and free memory map ...[2024-10-01 13:40:35.687596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:25.616 passed 00:06:25.616 Test: mem map translation ...[2024-10-01 13:40:35.734775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:25.616 [2024-10-01 13:40:35.734854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:25.616 [2024-10-01 13:40:35.734919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:25.616 [2024-10-01 13:40:35.734939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:25.616 passed 00:06:25.616 Test: mem map registration ...[2024-10-01 13:40:35.804637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:25.616 [2024-10-01 13:40:35.804712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:25.875 passed 00:06:25.875 Test: mem map adjacent registrations ...passed 00:06:25.875 00:06:25.875 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.875 suites 1 1 n/a 0 0 00:06:25.875 tests 4 4 4 0 0 00:06:25.875 asserts 152 152 152 0 n/a 00:06:25.875 00:06:25.875 Elapsed time = 0.284 seconds 00:06:25.875 00:06:25.875 real 0m0.336s 00:06:25.875 user 0m0.303s 00:06:25.875 sys 0m0.026s 00:06:25.875 13:40:35 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.875 ************************************ 00:06:25.875 END TEST env_memory 00:06:25.875 ************************************ 00:06:25.875 13:40:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:25.875 13:40:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:25.875 13:40:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.875 13:40:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.875 13:40:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.875 
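The `lcov --version` gate traced earlier (`lt 1.15 2` via `cmp_versions`) splits both version strings on `.`, `-`, and `:` and compares the numeric fields position by position. A compact sketch of that comparison (simplified from scripts/common.sh; assumes purely numeric fields):

```shell
# Simplified sketch of the cmp_versions logic traced above: split on
# ".-:" and compare field by field, treating missing fields as 0.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # the branch the trace above takes
```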
************************************ 00:06:25.875 START TEST env_vtophys 00:06:25.875 ************************************ 00:06:25.875 13:40:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:26.133 EAL: lib.eal log level changed from notice to debug 00:06:26.133 EAL: Detected lcore 0 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 1 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 2 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 3 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 4 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 5 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 6 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 7 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 8 as core 0 on socket 0 00:06:26.133 EAL: Detected lcore 9 as core 0 on socket 0 00:06:26.133 EAL: Maximum logical cores by configuration: 128 00:06:26.133 EAL: Detected CPU lcores: 10 00:06:26.133 EAL: Detected NUMA nodes: 1 00:06:26.133 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:26.133 EAL: Detected shared linkage of DPDK 00:06:26.133 EAL: No shared files mode enabled, IPC will be disabled 00:06:26.133 EAL: Selected IOVA mode 'PA' 00:06:26.133 EAL: Probing VFIO support... 00:06:26.133 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:26.133 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:26.133 EAL: Ask a virtual area of 0x2e000 bytes 00:06:26.133 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:26.133 EAL: Setting up physically contiguous memory... 
00:06:26.133 EAL: Setting maximum number of open files to 524288 00:06:26.133 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:26.133 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:26.133 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.133 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:26.133 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.133 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.133 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:26.133 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:26.133 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.133 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:26.133 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.133 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.133 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:26.133 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:26.133 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.133 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:26.133 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.133 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.133 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:26.133 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:26.133 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.133 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:26.133 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.133 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.133 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:26.133 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:26.133 EAL: Hugepages will be freed exactly as allocated. 
00:06:26.133 EAL: No shared files mode enabled, IPC is disabled 00:06:26.133 EAL: No shared files mode enabled, IPC is disabled 00:06:26.133 EAL: TSC frequency is ~2490000 KHz 00:06:26.133 EAL: Main lcore 0 is ready (tid=7f75b3523a40;cpuset=[0]) 00:06:26.133 EAL: Trying to obtain current memory policy. 00:06:26.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.133 EAL: Restoring previous memory policy: 0 00:06:26.133 EAL: request: mp_malloc_sync 00:06:26.133 EAL: No shared files mode enabled, IPC is disabled 00:06:26.133 EAL: Heap on socket 0 was expanded by 2MB 00:06:26.133 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:26.133 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:26.133 EAL: Mem event callback 'spdk:(nil)' registered 00:06:26.133 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:26.133 00:06:26.133 00:06:26.133 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.133 http://cunit.sourceforge.net/ 00:06:26.133 00:06:26.133 00:06:26.133 Suite: components_suite 00:06:26.701 Test: vtophys_malloc_test ...passed 00:06:26.701 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:26.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.701 EAL: Restoring previous memory policy: 4 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was expanded by 4MB 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was shrunk by 4MB 00:06:26.701 EAL: Trying to obtain current memory policy. 
00:06:26.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.701 EAL: Restoring previous memory policy: 4 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was expanded by 6MB 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was shrunk by 6MB 00:06:26.701 EAL: Trying to obtain current memory policy. 00:06:26.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.701 EAL: Restoring previous memory policy: 4 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was expanded by 10MB 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was shrunk by 10MB 00:06:26.701 EAL: Trying to obtain current memory policy. 00:06:26.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.701 EAL: Restoring previous memory policy: 4 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was expanded by 18MB 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was shrunk by 18MB 00:06:26.701 EAL: Trying to obtain current memory policy. 
00:06:26.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.701 EAL: Restoring previous memory policy: 4 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was expanded by 34MB 00:06:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.701 EAL: request: mp_malloc_sync 00:06:26.701 EAL: No shared files mode enabled, IPC is disabled 00:06:26.701 EAL: Heap on socket 0 was shrunk by 34MB 00:06:26.959 EAL: Trying to obtain current memory policy. 00:06:26.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.959 EAL: Restoring previous memory policy: 4 00:06:26.959 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.959 EAL: request: mp_malloc_sync 00:06:26.959 EAL: No shared files mode enabled, IPC is disabled 00:06:26.959 EAL: Heap on socket 0 was expanded by 66MB 00:06:26.959 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.959 EAL: request: mp_malloc_sync 00:06:26.959 EAL: No shared files mode enabled, IPC is disabled 00:06:26.959 EAL: Heap on socket 0 was shrunk by 66MB 00:06:27.218 EAL: Trying to obtain current memory policy. 00:06:27.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.218 EAL: Restoring previous memory policy: 4 00:06:27.218 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.218 EAL: request: mp_malloc_sync 00:06:27.218 EAL: No shared files mode enabled, IPC is disabled 00:06:27.218 EAL: Heap on socket 0 was expanded by 130MB 00:06:27.475 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.475 EAL: request: mp_malloc_sync 00:06:27.475 EAL: No shared files mode enabled, IPC is disabled 00:06:27.475 EAL: Heap on socket 0 was shrunk by 130MB 00:06:27.732 EAL: Trying to obtain current memory policy. 
00:06:27.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.732 EAL: Restoring previous memory policy: 4 00:06:27.732 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.732 EAL: request: mp_malloc_sync 00:06:27.732 EAL: No shared files mode enabled, IPC is disabled 00:06:27.732 EAL: Heap on socket 0 was expanded by 258MB 00:06:27.990 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.248 EAL: request: mp_malloc_sync 00:06:28.248 EAL: No shared files mode enabled, IPC is disabled 00:06:28.248 EAL: Heap on socket 0 was shrunk by 258MB 00:06:28.506 EAL: Trying to obtain current memory policy. 00:06:28.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.797 EAL: Restoring previous memory policy: 4 00:06:28.797 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.797 EAL: request: mp_malloc_sync 00:06:28.797 EAL: No shared files mode enabled, IPC is disabled 00:06:28.797 EAL: Heap on socket 0 was expanded by 514MB 00:06:29.730 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.730 EAL: request: mp_malloc_sync 00:06:29.730 EAL: No shared files mode enabled, IPC is disabled 00:06:29.730 EAL: Heap on socket 0 was shrunk by 514MB 00:06:30.666 EAL: Trying to obtain current memory policy. 
00:06:30.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.923 EAL: Restoring previous memory policy: 4 00:06:30.923 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.923 EAL: request: mp_malloc_sync 00:06:30.923 EAL: No shared files mode enabled, IPC is disabled 00:06:30.923 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.822 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.081 EAL: request: mp_malloc_sync 00:06:33.081 EAL: No shared files mode enabled, IPC is disabled 00:06:33.081 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:34.985 passed 00:06:34.985 00:06:34.985 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.985 suites 1 1 n/a 0 0 00:06:34.985 tests 2 2 2 0 0 00:06:34.985 asserts 5761 5761 5761 0 n/a 00:06:34.985 00:06:34.985 Elapsed time = 8.459 seconds 00:06:34.985 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.985 EAL: request: mp_malloc_sync 00:06:34.985 EAL: No shared files mode enabled, IPC is disabled 00:06:34.985 EAL: Heap on socket 0 was shrunk by 2MB 00:06:34.985 EAL: No shared files mode enabled, IPC is disabled 00:06:34.985 EAL: No shared files mode enabled, IPC is disabled 00:06:34.985 EAL: No shared files mode enabled, IPC is disabled 00:06:34.985 00:06:34.985 real 0m8.782s 00:06:34.985 user 0m7.755s 00:06:34.985 sys 0m0.866s 00:06:34.985 13:40:44 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.985 13:40:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:34.985 ************************************ 00:06:34.985 END TEST env_vtophys 00:06:34.985 ************************************ 00:06:34.985 13:40:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:34.985 13:40:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.985 13:40:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.985 13:40:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.985 
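The heap expansions logged by `vtophys_spdk_malloc_test` above (4, 6, 10, 18, ..., 1026 MB) appear to follow 2 + 2^k MiB for k = 1..10, i.e. a doubling payload plus 2 MiB of overhead; reproducing the observed ladder:

```shell
# Reproduce the expansion sizes observed in the vtophys trace above.
sizes=()
for ((k = 1; k <= 10; k++)); do
    sizes+=($((2 + (1 << k))))
done
echo "${sizes[*]}"   # prints: 4 6 10 18 34 66 130 258 514 1026
```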
************************************ 00:06:34.985 START TEST env_pci 00:06:34.985 ************************************ 00:06:34.985 13:40:44 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:34.985 00:06:34.985 00:06:34.985 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.985 http://cunit.sourceforge.net/ 00:06:34.985 00:06:34.985 00:06:34.985 Suite: pci 00:06:34.985 Test: pci_hook ...[2024-10-01 13:40:44.913840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56534 has claimed it 00:06:34.985 passed 00:06:34.985 00:06:34.985 EAL: Cannot find device (10000:00:01.0) 00:06:34.985 EAL: Failed to attach device on primary process 00:06:34.985 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.985 suites 1 1 n/a 0 0 00:06:34.985 tests 1 1 1 0 0 00:06:34.985 asserts 25 25 25 0 n/a 00:06:34.985 00:06:34.985 Elapsed time = 0.012 seconds 00:06:34.985 00:06:34.985 real 0m0.118s 00:06:34.985 user 0m0.049s 00:06:34.985 sys 0m0.068s 00:06:34.985 ************************************ 00:06:34.985 END TEST env_pci 00:06:34.985 ************************************ 00:06:34.985 13:40:44 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.985 13:40:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:34.985 13:40:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:34.985 13:40:45 env -- env/env.sh@15 -- # uname 00:06:34.985 13:40:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:34.985 13:40:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:34.985 13:40:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.985 13:40:45 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:34.985 13:40:45 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.985 13:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.985 ************************************ 00:06:34.985 START TEST env_dpdk_post_init 00:06:34.985 ************************************ 00:06:34.985 13:40:45 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.985 EAL: Detected CPU lcores: 10 00:06:34.985 EAL: Detected NUMA nodes: 1 00:06:34.985 EAL: Detected shared linkage of DPDK 00:06:34.985 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:35.256 EAL: Selected IOVA mode 'PA' 00:06:35.256 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.256 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:35.256 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:35.256 Starting DPDK initialization... 00:06:35.256 Starting SPDK post initialization... 00:06:35.256 SPDK NVMe probe 00:06:35.256 Attaching to 0000:00:10.0 00:06:35.256 Attaching to 0000:00:11.0 00:06:35.256 Attached to 0000:00:10.0 00:06:35.256 Attached to 0000:00:11.0 00:06:35.256 Cleaning up... 
00:06:35.256 00:06:35.256 real 0m0.310s 00:06:35.256 user 0m0.097s 00:06:35.256 sys 0m0.113s 00:06:35.256 13:40:45 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.256 13:40:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.256 ************************************ 00:06:35.256 END TEST env_dpdk_post_init 00:06:35.256 ************************************ 00:06:35.525 13:40:45 env -- env/env.sh@26 -- # uname 00:06:35.525 13:40:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:35.525 13:40:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.525 13:40:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.525 13:40:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.525 13:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.525 ************************************ 00:06:35.525 START TEST env_mem_callbacks 00:06:35.525 ************************************ 00:06:35.525 13:40:45 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.525 EAL: Detected CPU lcores: 10 00:06:35.525 EAL: Detected NUMA nodes: 1 00:06:35.525 EAL: Detected shared linkage of DPDK 00:06:35.525 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:35.525 EAL: Selected IOVA mode 'PA' 00:06:35.525 00:06:35.525 00:06:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.525 http://cunit.sourceforge.net/ 00:06:35.525 00:06:35.525 00:06:35.525 Suite: memory 00:06:35.525 Test: test ... 
00:06:35.525 register 0x200000200000 2097152 00:06:35.525 malloc 3145728 00:06:35.525 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.525 register 0x200000400000 4194304 00:06:35.525 buf 0x2000004fffc0 len 3145728 PASSED 00:06:35.525 malloc 64 00:06:35.525 buf 0x2000004ffec0 len 64 PASSED 00:06:35.525 malloc 4194304 00:06:35.525 register 0x200000800000 6291456 00:06:35.525 buf 0x2000009fffc0 len 4194304 PASSED 00:06:35.525 free 0x2000004fffc0 3145728 00:06:35.525 free 0x2000004ffec0 64 00:06:35.525 unregister 0x200000400000 4194304 PASSED 00:06:35.526 free 0x2000009fffc0 4194304 00:06:35.526 unregister 0x200000800000 6291456 PASSED 00:06:35.526 malloc 8388608 00:06:35.526 register 0x200000400000 10485760 00:06:35.526 buf 0x2000005fffc0 len 8388608 PASSED 00:06:35.526 free 0x2000005fffc0 8388608 00:06:35.526 unregister 0x200000400000 10485760 PASSED 00:06:35.785 passed 00:06:35.785 00:06:35.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.785 suites 1 1 n/a 0 0 00:06:35.785 tests 1 1 1 0 0 00:06:35.785 asserts 15 15 15 0 n/a 00:06:35.785 00:06:35.785 Elapsed time = 0.081 seconds 00:06:35.785 00:06:35.785 real 0m0.292s 00:06:35.785 user 0m0.116s 00:06:35.785 sys 0m0.074s 00:06:35.785 13:40:45 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.785 ************************************ 00:06:35.785 END TEST env_mem_callbacks 00:06:35.785 13:40:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:35.785 ************************************ 00:06:35.785 00:06:35.785 real 0m10.449s 00:06:35.785 user 0m8.585s 00:06:35.785 sys 0m1.506s 00:06:35.785 13:40:45 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.785 13:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.785 ************************************ 00:06:35.785 END TEST env 00:06:35.785 ************************************ 00:06:35.785 13:40:45 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:35.785 13:40:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.785 13:40:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.785 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:06:35.785 ************************************ 00:06:35.785 START TEST rpc 00:06:35.785 ************************************ 00:06:35.785 13:40:45 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:36.045 * Looking for test storage... 00:06:36.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.045 13:40:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.045 13:40:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.045 13:40:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.045 13:40:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.045 13:40:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.045 13:40:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:36.045 13:40:46 rpc -- scripts/common.sh@345 -- # : 1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.045 13:40:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.045 13:40:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@353 -- # local d=1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.045 13:40:46 rpc -- scripts/common.sh@355 -- # echo 1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.045 13:40:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@353 -- # local d=2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.045 13:40:46 rpc -- scripts/common.sh@355 -- # echo 2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.045 13:40:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.045 13:40:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.045 13:40:46 rpc -- scripts/common.sh@368 -- # return 0 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.045 --rc genhtml_branch_coverage=1 00:06:36.045 --rc genhtml_function_coverage=1 00:06:36.045 --rc genhtml_legend=1 00:06:36.045 --rc geninfo_all_blocks=1 00:06:36.045 --rc geninfo_unexecuted_blocks=1 00:06:36.045 00:06:36.045 ' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.045 --rc genhtml_branch_coverage=1 00:06:36.045 --rc genhtml_function_coverage=1 00:06:36.045 --rc genhtml_legend=1 00:06:36.045 --rc geninfo_all_blocks=1 00:06:36.045 --rc geninfo_unexecuted_blocks=1 00:06:36.045 00:06:36.045 ' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:36.045 --rc genhtml_branch_coverage=1 00:06:36.045 --rc genhtml_function_coverage=1 00:06:36.045 --rc genhtml_legend=1 00:06:36.045 --rc geninfo_all_blocks=1 00:06:36.045 --rc geninfo_unexecuted_blocks=1 00:06:36.045 00:06:36.045 ' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.045 --rc genhtml_branch_coverage=1 00:06:36.045 --rc genhtml_function_coverage=1 00:06:36.045 --rc genhtml_legend=1 00:06:36.045 --rc geninfo_all_blocks=1 00:06:36.045 --rc geninfo_unexecuted_blocks=1 00:06:36.045 00:06:36.045 ' 00:06:36.045 13:40:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56661 00:06:36.045 13:40:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:36.045 13:40:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.045 13:40:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56661 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@831 -- # '[' -z 56661 ']' 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.045 13:40:46 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.046 13:40:46 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.046 13:40:46 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.046 13:40:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.046 [2024-10-01 13:40:46.222378] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:36.046 [2024-10-01 13:40:46.222547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56661 ] 00:06:36.305 [2024-10-01 13:40:46.385122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.564 [2024-10-01 13:40:46.676980] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:36.564 [2024-10-01 13:40:46.677064] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56661' to capture a snapshot of events at runtime. 00:06:36.564 [2024-10-01 13:40:46.677079] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.564 [2024-10-01 13:40:46.677097] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.564 [2024-10-01 13:40:46.677109] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56661 for offline analysis/debug. 
00:06:36.564 [2024-10-01 13:40:46.677172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.505 13:40:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.505 13:40:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.505 13:40:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:37.505 13:40:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:37.505 13:40:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:37.505 13:40:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:37.505 13:40:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.505 13:40:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.505 13:40:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.505 ************************************ 00:06:37.505 START TEST rpc_integrity 00:06:37.505 ************************************ 00:06:37.505 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:37.505 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.505 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.505 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.766 13:40:47 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.766 { 00:06:37.766 "name": "Malloc0", 00:06:37.766 "aliases": [ 00:06:37.766 "16d97a22-bb4a-4814-a51b-56f737f21b99" 00:06:37.766 ], 00:06:37.766 "product_name": "Malloc disk", 00:06:37.766 "block_size": 512, 00:06:37.766 "num_blocks": 16384, 00:06:37.766 "uuid": "16d97a22-bb4a-4814-a51b-56f737f21b99", 00:06:37.766 "assigned_rate_limits": { 00:06:37.766 "rw_ios_per_sec": 0, 00:06:37.766 "rw_mbytes_per_sec": 0, 00:06:37.766 "r_mbytes_per_sec": 0, 00:06:37.766 "w_mbytes_per_sec": 0 00:06:37.766 }, 00:06:37.766 "claimed": false, 00:06:37.766 "zoned": false, 00:06:37.766 "supported_io_types": { 00:06:37.766 "read": true, 00:06:37.766 "write": true, 00:06:37.766 "unmap": true, 00:06:37.766 "flush": true, 00:06:37.766 "reset": true, 00:06:37.766 "nvme_admin": false, 00:06:37.766 "nvme_io": false, 00:06:37.766 "nvme_io_md": false, 00:06:37.766 "write_zeroes": true, 00:06:37.766 "zcopy": true, 00:06:37.766 "get_zone_info": false, 00:06:37.766 "zone_management": false, 00:06:37.766 "zone_append": false, 00:06:37.766 "compare": false, 00:06:37.766 "compare_and_write": false, 00:06:37.766 "abort": true, 00:06:37.766 "seek_hole": false, 
00:06:37.766 "seek_data": false, 00:06:37.766 "copy": true, 00:06:37.766 "nvme_iov_md": false 00:06:37.766 }, 00:06:37.766 "memory_domains": [ 00:06:37.766 { 00:06:37.766 "dma_device_id": "system", 00:06:37.766 "dma_device_type": 1 00:06:37.766 }, 00:06:37.766 { 00:06:37.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.766 "dma_device_type": 2 00:06:37.766 } 00:06:37.766 ], 00:06:37.766 "driver_specific": {} 00:06:37.766 } 00:06:37.766 ]' 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.766 [2024-10-01 13:40:47.864867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:37.766 [2024-10-01 13:40:47.864972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.766 [2024-10-01 13:40:47.865004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:37.766 [2024-10-01 13:40:47.865020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.766 [2024-10-01 13:40:47.867704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.766 [2024-10-01 13:40:47.867750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.766 Passthru0 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:37.766 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.766 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.766 { 00:06:37.766 "name": "Malloc0", 00:06:37.766 "aliases": [ 00:06:37.766 "16d97a22-bb4a-4814-a51b-56f737f21b99" 00:06:37.767 ], 00:06:37.767 "product_name": "Malloc disk", 00:06:37.767 "block_size": 512, 00:06:37.767 "num_blocks": 16384, 00:06:37.767 "uuid": "16d97a22-bb4a-4814-a51b-56f737f21b99", 00:06:37.767 "assigned_rate_limits": { 00:06:37.767 "rw_ios_per_sec": 0, 00:06:37.767 "rw_mbytes_per_sec": 0, 00:06:37.767 "r_mbytes_per_sec": 0, 00:06:37.767 "w_mbytes_per_sec": 0 00:06:37.767 }, 00:06:37.767 "claimed": true, 00:06:37.767 "claim_type": "exclusive_write", 00:06:37.767 "zoned": false, 00:06:37.767 "supported_io_types": { 00:06:37.767 "read": true, 00:06:37.767 "write": true, 00:06:37.767 "unmap": true, 00:06:37.767 "flush": true, 00:06:37.767 "reset": true, 00:06:37.767 "nvme_admin": false, 00:06:37.767 "nvme_io": false, 00:06:37.767 "nvme_io_md": false, 00:06:37.767 "write_zeroes": true, 00:06:37.767 "zcopy": true, 00:06:37.767 "get_zone_info": false, 00:06:37.767 "zone_management": false, 00:06:37.767 "zone_append": false, 00:06:37.767 "compare": false, 00:06:37.767 "compare_and_write": false, 00:06:37.767 "abort": true, 00:06:37.767 "seek_hole": false, 00:06:37.767 "seek_data": false, 00:06:37.767 "copy": true, 00:06:37.767 "nvme_iov_md": false 00:06:37.767 }, 00:06:37.767 "memory_domains": [ 00:06:37.767 { 00:06:37.767 "dma_device_id": "system", 00:06:37.767 "dma_device_type": 1 00:06:37.767 }, 00:06:37.767 { 00:06:37.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.767 "dma_device_type": 2 00:06:37.767 } 00:06:37.767 ], 00:06:37.767 "driver_specific": {} 00:06:37.767 }, 00:06:37.767 { 00:06:37.767 "name": "Passthru0", 00:06:37.767 "aliases": [ 00:06:37.767 "17ee0fca-cfdd-592d-89b2-3d600d0c5ef0" 00:06:37.767 ], 00:06:37.767 "product_name": "passthru", 00:06:37.767 
"block_size": 512, 00:06:37.767 "num_blocks": 16384, 00:06:37.767 "uuid": "17ee0fca-cfdd-592d-89b2-3d600d0c5ef0", 00:06:37.767 "assigned_rate_limits": { 00:06:37.767 "rw_ios_per_sec": 0, 00:06:37.767 "rw_mbytes_per_sec": 0, 00:06:37.767 "r_mbytes_per_sec": 0, 00:06:37.767 "w_mbytes_per_sec": 0 00:06:37.767 }, 00:06:37.767 "claimed": false, 00:06:37.767 "zoned": false, 00:06:37.767 "supported_io_types": { 00:06:37.767 "read": true, 00:06:37.767 "write": true, 00:06:37.767 "unmap": true, 00:06:37.767 "flush": true, 00:06:37.767 "reset": true, 00:06:37.767 "nvme_admin": false, 00:06:37.767 "nvme_io": false, 00:06:37.767 "nvme_io_md": false, 00:06:37.767 "write_zeroes": true, 00:06:37.767 "zcopy": true, 00:06:37.767 "get_zone_info": false, 00:06:37.767 "zone_management": false, 00:06:37.767 "zone_append": false, 00:06:37.767 "compare": false, 00:06:37.767 "compare_and_write": false, 00:06:37.767 "abort": true, 00:06:37.767 "seek_hole": false, 00:06:37.767 "seek_data": false, 00:06:37.767 "copy": true, 00:06:37.767 "nvme_iov_md": false 00:06:37.767 }, 00:06:37.767 "memory_domains": [ 00:06:37.767 { 00:06:37.767 "dma_device_id": "system", 00:06:37.767 "dma_device_type": 1 00:06:37.767 }, 00:06:37.767 { 00:06:37.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.767 "dma_device_type": 2 00:06:37.767 } 00:06:37.767 ], 00:06:37.767 "driver_specific": { 00:06:37.767 "passthru": { 00:06:37.767 "name": "Passthru0", 00:06:37.767 "base_bdev_name": "Malloc0" 00:06:37.767 } 00:06:37.767 } 00:06:37.767 } 00:06:37.767 ]' 00:06:37.767 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.767 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.767 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.767 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.767 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:47 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.026 13:40:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:38.026 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.026 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.026 13:40:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:38.026 13:40:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.026 13:40:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.026 13:40:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:38.026 13:40:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:38.026 13:40:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:38.026 00:06:38.026 real 0m0.374s 00:06:38.026 user 0m0.188s 00:06:38.026 sys 0m0.074s 00:06:38.026 ************************************ 00:06:38.026 END TEST rpc_integrity 00:06:38.026 ************************************ 00:06:38.026 13:40:48 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.026 13:40:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:38.026 13:40:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.026 13:40:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.026 13:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 ************************************ 00:06:38.026 START TEST rpc_plugins 00:06:38.026 ************************************ 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:38.026 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.026 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:38.026 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.026 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:38.026 { 00:06:38.026 "name": "Malloc1", 00:06:38.026 "aliases": [ 00:06:38.026 "0d34b06d-46b9-4ed2-bacd-b359c326b371" 00:06:38.026 ], 00:06:38.026 "product_name": "Malloc disk", 00:06:38.026 "block_size": 4096, 00:06:38.026 "num_blocks": 256, 00:06:38.026 "uuid": "0d34b06d-46b9-4ed2-bacd-b359c326b371", 00:06:38.026 "assigned_rate_limits": { 00:06:38.026 "rw_ios_per_sec": 0, 00:06:38.026 "rw_mbytes_per_sec": 0, 00:06:38.026 "r_mbytes_per_sec": 0, 00:06:38.026 "w_mbytes_per_sec": 0 00:06:38.026 }, 00:06:38.026 "claimed": false, 00:06:38.026 "zoned": false, 00:06:38.026 "supported_io_types": { 00:06:38.026 "read": true, 00:06:38.026 "write": true, 00:06:38.026 "unmap": true, 00:06:38.026 "flush": true, 00:06:38.026 "reset": true, 00:06:38.026 "nvme_admin": false, 00:06:38.026 "nvme_io": false, 00:06:38.026 "nvme_io_md": false, 00:06:38.027 "write_zeroes": true, 00:06:38.027 "zcopy": true, 00:06:38.027 "get_zone_info": false, 00:06:38.027 "zone_management": false, 00:06:38.027 "zone_append": false, 00:06:38.027 "compare": false, 00:06:38.027 "compare_and_write": false, 00:06:38.027 "abort": true, 00:06:38.027 "seek_hole": false, 00:06:38.027 "seek_data": false, 00:06:38.027 "copy": 
true, 00:06:38.027 "nvme_iov_md": false 00:06:38.027 }, 00:06:38.027 "memory_domains": [ 00:06:38.027 { 00:06:38.027 "dma_device_id": "system", 00:06:38.027 "dma_device_type": 1 00:06:38.027 }, 00:06:38.027 { 00:06:38.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.027 "dma_device_type": 2 00:06:38.027 } 00:06:38.027 ], 00:06:38.027 "driver_specific": {} 00:06:38.027 } 00:06:38.027 ]' 00:06:38.027 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:38.286 13:40:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:38.286 00:06:38.286 real 0m0.189s 00:06:38.286 user 0m0.113s 00:06:38.286 sys 0m0.027s 00:06:38.286 ************************************ 00:06:38.286 END TEST rpc_plugins 00:06:38.286 ************************************ 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.286 13:40:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:38.286 13:40:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:38.286 13:40:48 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.286 13:40:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.286 13:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.286 ************************************ 00:06:38.286 START TEST rpc_trace_cmd_test 00:06:38.286 ************************************ 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:38.286 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56661", 00:06:38.286 "tpoint_group_mask": "0x8", 00:06:38.286 "iscsi_conn": { 00:06:38.286 "mask": "0x2", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "scsi": { 00:06:38.286 "mask": "0x4", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "bdev": { 00:06:38.286 "mask": "0x8", 00:06:38.286 "tpoint_mask": "0xffffffffffffffff" 00:06:38.286 }, 00:06:38.286 "nvmf_rdma": { 00:06:38.286 "mask": "0x10", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "nvmf_tcp": { 00:06:38.286 "mask": "0x20", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "ftl": { 00:06:38.286 "mask": "0x40", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "blobfs": { 00:06:38.286 "mask": "0x80", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "dsa": { 00:06:38.286 "mask": "0x200", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "thread": { 00:06:38.286 "mask": "0x400", 00:06:38.286 
"tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "nvme_pcie": { 00:06:38.286 "mask": "0x800", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "iaa": { 00:06:38.286 "mask": "0x1000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "nvme_tcp": { 00:06:38.286 "mask": "0x2000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "bdev_nvme": { 00:06:38.286 "mask": "0x4000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "sock": { 00:06:38.286 "mask": "0x8000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "blob": { 00:06:38.286 "mask": "0x10000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 }, 00:06:38.286 "bdev_raid": { 00:06:38.286 "mask": "0x20000", 00:06:38.286 "tpoint_mask": "0x0" 00:06:38.286 } 00:06:38.286 }' 00:06:38.286 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:38.546 00:06:38.546 real 0m0.251s 00:06:38.546 user 0m0.198s 00:06:38.546 sys 0m0.042s 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.546 ************************************ 00:06:38.546 END TEST rpc_trace_cmd_test 00:06:38.546 
************************************ 00:06:38.546 13:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.546 13:40:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:38.546 13:40:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:38.546 13:40:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:38.546 13:40:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.546 13:40:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.546 13:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.546 ************************************ 00:06:38.546 START TEST rpc_daemon_integrity 00:06:38.546 ************************************ 00:06:38.546 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:38.546 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:38.546 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.546 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.807 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.807 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:38.807 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:38.808 { 00:06:38.808 "name": "Malloc2", 00:06:38.808 "aliases": [ 00:06:38.808 "9c1cff05-15a1-4cf5-b5a8-6fd5f418fb1f" 00:06:38.808 ], 00:06:38.808 "product_name": "Malloc disk", 00:06:38.808 "block_size": 512, 00:06:38.808 "num_blocks": 16384, 00:06:38.808 "uuid": "9c1cff05-15a1-4cf5-b5a8-6fd5f418fb1f", 00:06:38.808 "assigned_rate_limits": { 00:06:38.808 "rw_ios_per_sec": 0, 00:06:38.808 "rw_mbytes_per_sec": 0, 00:06:38.808 "r_mbytes_per_sec": 0, 00:06:38.808 "w_mbytes_per_sec": 0 00:06:38.808 }, 00:06:38.808 "claimed": false, 00:06:38.808 "zoned": false, 00:06:38.808 "supported_io_types": { 00:06:38.808 "read": true, 00:06:38.808 "write": true, 00:06:38.808 "unmap": true, 00:06:38.808 "flush": true, 00:06:38.808 "reset": true, 00:06:38.808 "nvme_admin": false, 00:06:38.808 "nvme_io": false, 00:06:38.808 "nvme_io_md": false, 00:06:38.808 "write_zeroes": true, 00:06:38.808 "zcopy": true, 00:06:38.808 "get_zone_info": false, 00:06:38.808 "zone_management": false, 00:06:38.808 "zone_append": false, 00:06:38.808 "compare": false, 00:06:38.808 "compare_and_write": false, 00:06:38.808 "abort": true, 00:06:38.808 "seek_hole": false, 00:06:38.808 "seek_data": false, 00:06:38.808 "copy": true, 00:06:38.808 "nvme_iov_md": false 00:06:38.808 }, 00:06:38.808 "memory_domains": [ 00:06:38.808 { 00:06:38.808 "dma_device_id": "system", 00:06:38.808 "dma_device_type": 1 00:06:38.808 }, 00:06:38.808 { 00:06:38.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.808 "dma_device_type": 2 00:06:38.808 } 00:06:38.808 ], 00:06:38.808 "driver_specific": {} 00:06:38.808 } 00:06:38.808 ]' 00:06:38.808 
13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.808 [2024-10-01 13:40:48.892562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:38.808 [2024-10-01 13:40:48.892635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.808 [2024-10-01 13:40:48.892661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:38.808 [2024-10-01 13:40:48.892676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.808 [2024-10-01 13:40:48.895313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.808 [2024-10-01 13:40:48.895356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:38.808 Passthru0 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:38.808 { 00:06:38.808 "name": "Malloc2", 00:06:38.808 "aliases": [ 00:06:38.808 "9c1cff05-15a1-4cf5-b5a8-6fd5f418fb1f" 00:06:38.808 ], 00:06:38.808 "product_name": "Malloc disk", 00:06:38.808 "block_size": 512, 
00:06:38.808 "num_blocks": 16384, 00:06:38.808 "uuid": "9c1cff05-15a1-4cf5-b5a8-6fd5f418fb1f", 00:06:38.808 "assigned_rate_limits": { 00:06:38.808 "rw_ios_per_sec": 0, 00:06:38.808 "rw_mbytes_per_sec": 0, 00:06:38.808 "r_mbytes_per_sec": 0, 00:06:38.808 "w_mbytes_per_sec": 0 00:06:38.808 }, 00:06:38.808 "claimed": true, 00:06:38.808 "claim_type": "exclusive_write", 00:06:38.808 "zoned": false, 00:06:38.808 "supported_io_types": { 00:06:38.808 "read": true, 00:06:38.808 "write": true, 00:06:38.808 "unmap": true, 00:06:38.808 "flush": true, 00:06:38.808 "reset": true, 00:06:38.808 "nvme_admin": false, 00:06:38.808 "nvme_io": false, 00:06:38.808 "nvme_io_md": false, 00:06:38.808 "write_zeroes": true, 00:06:38.808 "zcopy": true, 00:06:38.808 "get_zone_info": false, 00:06:38.808 "zone_management": false, 00:06:38.808 "zone_append": false, 00:06:38.808 "compare": false, 00:06:38.808 "compare_and_write": false, 00:06:38.808 "abort": true, 00:06:38.808 "seek_hole": false, 00:06:38.808 "seek_data": false, 00:06:38.808 "copy": true, 00:06:38.808 "nvme_iov_md": false 00:06:38.808 }, 00:06:38.808 "memory_domains": [ 00:06:38.808 { 00:06:38.808 "dma_device_id": "system", 00:06:38.808 "dma_device_type": 1 00:06:38.808 }, 00:06:38.808 { 00:06:38.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.808 "dma_device_type": 2 00:06:38.808 } 00:06:38.808 ], 00:06:38.808 "driver_specific": {} 00:06:38.808 }, 00:06:38.808 { 00:06:38.808 "name": "Passthru0", 00:06:38.808 "aliases": [ 00:06:38.808 "3dfd0c32-9a65-523f-817e-84db979582bf" 00:06:38.808 ], 00:06:38.808 "product_name": "passthru", 00:06:38.808 "block_size": 512, 00:06:38.808 "num_blocks": 16384, 00:06:38.808 "uuid": "3dfd0c32-9a65-523f-817e-84db979582bf", 00:06:38.808 "assigned_rate_limits": { 00:06:38.808 "rw_ios_per_sec": 0, 00:06:38.808 "rw_mbytes_per_sec": 0, 00:06:38.808 "r_mbytes_per_sec": 0, 00:06:38.808 "w_mbytes_per_sec": 0 00:06:38.808 }, 00:06:38.808 "claimed": false, 00:06:38.808 "zoned": false, 00:06:38.808 
"supported_io_types": { 00:06:38.808 "read": true, 00:06:38.808 "write": true, 00:06:38.808 "unmap": true, 00:06:38.808 "flush": true, 00:06:38.808 "reset": true, 00:06:38.808 "nvme_admin": false, 00:06:38.808 "nvme_io": false, 00:06:38.808 "nvme_io_md": false, 00:06:38.808 "write_zeroes": true, 00:06:38.808 "zcopy": true, 00:06:38.808 "get_zone_info": false, 00:06:38.808 "zone_management": false, 00:06:38.808 "zone_append": false, 00:06:38.808 "compare": false, 00:06:38.808 "compare_and_write": false, 00:06:38.808 "abort": true, 00:06:38.808 "seek_hole": false, 00:06:38.808 "seek_data": false, 00:06:38.808 "copy": true, 00:06:38.808 "nvme_iov_md": false 00:06:38.808 }, 00:06:38.808 "memory_domains": [ 00:06:38.808 { 00:06:38.808 "dma_device_id": "system", 00:06:38.808 "dma_device_type": 1 00:06:38.808 }, 00:06:38.808 { 00:06:38.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.808 "dma_device_type": 2 00:06:38.808 } 00:06:38.808 ], 00:06:38.808 "driver_specific": { 00:06:38.808 "passthru": { 00:06:38.808 "name": "Passthru0", 00:06:38.808 "base_bdev_name": "Malloc2" 00:06:38.808 } 00:06:38.808 } 00:06:38.808 } 00:06:38.808 ]' 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.808 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:39.070 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.070 13:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:39.070 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.070 13:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:39.070 00:06:39.070 real 0m0.378s 00:06:39.070 user 0m0.206s 00:06:39.070 sys 0m0.067s 00:06:39.070 ************************************ 00:06:39.070 END TEST rpc_daemon_integrity 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.070 13:40:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:39.070 ************************************ 00:06:39.070 13:40:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:39.070 13:40:49 rpc -- rpc/rpc.sh@84 -- # killprocess 56661 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@950 -- # '[' -z 56661 ']' 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@954 -- # kill -0 56661 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@955 -- # uname 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56661 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.070 killing process with pid 56661 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56661' 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@969 -- # kill 56661 00:06:39.070 13:40:49 rpc -- common/autotest_common.sh@974 -- # wait 56661 00:06:42.358 00:06:42.358 real 0m5.972s 00:06:42.358 user 0m6.444s 00:06:42.358 sys 0m1.128s 00:06:42.358 13:40:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.358 13:40:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.358 ************************************ 00:06:42.358 END TEST rpc 00:06:42.358 ************************************ 00:06:42.358 13:40:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:42.358 13:40:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.358 13:40:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.358 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.358 ************************************ 00:06:42.358 START TEST skip_rpc 00:06:42.358 ************************************ 00:06:42.358 13:40:51 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:42.358 * Looking for test storage... 
00:06:42.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:42.358 13:40:51 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.358 13:40:51 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.358 13:40:51 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.358 13:40:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.358 --rc genhtml_branch_coverage=1 00:06:42.358 --rc genhtml_function_coverage=1 00:06:42.358 --rc genhtml_legend=1 00:06:42.358 --rc geninfo_all_blocks=1 00:06:42.358 --rc geninfo_unexecuted_blocks=1 00:06:42.358 00:06:42.358 ' 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.358 --rc genhtml_branch_coverage=1 00:06:42.358 --rc genhtml_function_coverage=1 00:06:42.358 --rc genhtml_legend=1 00:06:42.358 --rc geninfo_all_blocks=1 00:06:42.358 --rc geninfo_unexecuted_blocks=1 00:06:42.358 00:06:42.358 ' 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.358 --rc genhtml_branch_coverage=1 00:06:42.358 --rc genhtml_function_coverage=1 00:06:42.358 --rc genhtml_legend=1 00:06:42.358 --rc geninfo_all_blocks=1 00:06:42.358 --rc geninfo_unexecuted_blocks=1 00:06:42.358 00:06:42.358 ' 00:06:42.358 13:40:52 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.358 --rc genhtml_branch_coverage=1 00:06:42.358 --rc genhtml_function_coverage=1 00:06:42.358 --rc genhtml_legend=1 00:06:42.358 --rc geninfo_all_blocks=1 00:06:42.358 --rc geninfo_unexecuted_blocks=1 00:06:42.358 00:06:42.358 ' 00:06:42.358 13:40:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:42.358 13:40:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:42.358 13:40:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:42.359 13:40:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.359 13:40:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.359 13:40:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.359 ************************************ 00:06:42.359 START TEST skip_rpc 00:06:42.359 ************************************ 00:06:42.359 13:40:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:42.359 13:40:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56901 00:06:42.359 13:40:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:42.359 13:40:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.359 13:40:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:42.359 [2024-10-01 13:40:52.177150] Starting SPDK v25.01-pre 
git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:42.359 [2024-10-01 13:40:52.177625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56901 ] 00:06:42.359 [2024-10-01 13:40:52.348727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.619 [2024-10-01 13:40:52.617783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56901 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56901 ']' 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56901 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56901 00:06:47.890 killing process with pid 56901 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56901' 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56901 00:06:47.890 13:40:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56901 00:06:49.796 00:06:49.796 real 0m7.646s 00:06:49.796 user 0m7.123s 00:06:49.796 sys 0m0.440s 00:06:49.796 13:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.796 ************************************ 00:06:49.796 END TEST skip_rpc 00:06:49.796 ************************************ 00:06:49.796 13:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.796 13:40:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:49.796 13:40:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.796 13:40:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.796 13:40:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.796 
************************************ 00:06:49.796 START TEST skip_rpc_with_json 00:06:49.796 ************************************ 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57005 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57005 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57005 ']' 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.796 13:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.796 [2024-10-01 13:40:59.885566] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:49.796 [2024-10-01 13:40:59.885700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57005 ] 00:06:50.054 [2024-10-01 13:41:00.049930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.356 [2024-10-01 13:41:00.271122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.296 [2024-10-01 13:41:01.155761] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:51.296 request: 00:06:51.296 { 00:06:51.296 "trtype": "tcp", 00:06:51.296 "method": "nvmf_get_transports", 00:06:51.296 "req_id": 1 00:06:51.296 } 00:06:51.296 Got JSON-RPC error response 00:06:51.296 response: 00:06:51.296 { 00:06:51.296 "code": -19, 00:06:51.296 "message": "No such device" 00:06:51.296 } 00:06:51.296 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.297 [2024-10-01 13:41:01.171866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.297 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:51.297 { 00:06:51.297 "subsystems": [ 00:06:51.297 { 00:06:51.297 "subsystem": "fsdev", 00:06:51.297 "config": [ 00:06:51.297 { 00:06:51.297 "method": "fsdev_set_opts", 00:06:51.297 "params": { 00:06:51.297 "fsdev_io_pool_size": 65535, 00:06:51.297 "fsdev_io_cache_size": 256 00:06:51.297 } 00:06:51.297 } 00:06:51.297 ] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "keyring", 00:06:51.297 "config": [] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "iobuf", 00:06:51.297 "config": [ 00:06:51.297 { 00:06:51.297 "method": "iobuf_set_options", 00:06:51.297 "params": { 00:06:51.297 "small_pool_count": 8192, 00:06:51.297 "large_pool_count": 1024, 00:06:51.297 "small_bufsize": 8192, 00:06:51.297 "large_bufsize": 135168 00:06:51.297 } 00:06:51.297 } 00:06:51.297 ] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "sock", 00:06:51.297 "config": [ 00:06:51.297 { 00:06:51.297 "method": "sock_set_default_impl", 00:06:51.297 "params": { 00:06:51.297 "impl_name": "posix" 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "sock_impl_set_options", 00:06:51.297 "params": { 00:06:51.297 "impl_name": "ssl", 00:06:51.297 "recv_buf_size": 4096, 00:06:51.297 "send_buf_size": 4096, 00:06:51.297 "enable_recv_pipe": true, 00:06:51.297 "enable_quickack": false, 00:06:51.297 "enable_placement_id": 0, 00:06:51.297 
"enable_zerocopy_send_server": true, 00:06:51.297 "enable_zerocopy_send_client": false, 00:06:51.297 "zerocopy_threshold": 0, 00:06:51.297 "tls_version": 0, 00:06:51.297 "enable_ktls": false 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "sock_impl_set_options", 00:06:51.297 "params": { 00:06:51.297 "impl_name": "posix", 00:06:51.297 "recv_buf_size": 2097152, 00:06:51.297 "send_buf_size": 2097152, 00:06:51.297 "enable_recv_pipe": true, 00:06:51.297 "enable_quickack": false, 00:06:51.297 "enable_placement_id": 0, 00:06:51.297 "enable_zerocopy_send_server": true, 00:06:51.297 "enable_zerocopy_send_client": false, 00:06:51.297 "zerocopy_threshold": 0, 00:06:51.297 "tls_version": 0, 00:06:51.297 "enable_ktls": false 00:06:51.297 } 00:06:51.297 } 00:06:51.297 ] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "vmd", 00:06:51.297 "config": [] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "accel", 00:06:51.297 "config": [ 00:06:51.297 { 00:06:51.297 "method": "accel_set_options", 00:06:51.297 "params": { 00:06:51.297 "small_cache_size": 128, 00:06:51.297 "large_cache_size": 16, 00:06:51.297 "task_count": 2048, 00:06:51.297 "sequence_count": 2048, 00:06:51.297 "buf_count": 2048 00:06:51.297 } 00:06:51.297 } 00:06:51.297 ] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "bdev", 00:06:51.297 "config": [ 00:06:51.297 { 00:06:51.297 "method": "bdev_set_options", 00:06:51.297 "params": { 00:06:51.297 "bdev_io_pool_size": 65535, 00:06:51.297 "bdev_io_cache_size": 256, 00:06:51.297 "bdev_auto_examine": true, 00:06:51.297 "iobuf_small_cache_size": 128, 00:06:51.297 "iobuf_large_cache_size": 16 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "bdev_raid_set_options", 00:06:51.297 "params": { 00:06:51.297 "process_window_size_kb": 1024, 00:06:51.297 "process_max_bandwidth_mb_sec": 0 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "bdev_iscsi_set_options", 00:06:51.297 "params": { 00:06:51.297 
"timeout_sec": 30 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "bdev_nvme_set_options", 00:06:51.297 "params": { 00:06:51.297 "action_on_timeout": "none", 00:06:51.297 "timeout_us": 0, 00:06:51.297 "timeout_admin_us": 0, 00:06:51.297 "keep_alive_timeout_ms": 10000, 00:06:51.297 "arbitration_burst": 0, 00:06:51.297 "low_priority_weight": 0, 00:06:51.297 "medium_priority_weight": 0, 00:06:51.297 "high_priority_weight": 0, 00:06:51.297 "nvme_adminq_poll_period_us": 10000, 00:06:51.297 "nvme_ioq_poll_period_us": 0, 00:06:51.297 "io_queue_requests": 0, 00:06:51.297 "delay_cmd_submit": true, 00:06:51.297 "transport_retry_count": 4, 00:06:51.297 "bdev_retry_count": 3, 00:06:51.297 "transport_ack_timeout": 0, 00:06:51.297 "ctrlr_loss_timeout_sec": 0, 00:06:51.297 "reconnect_delay_sec": 0, 00:06:51.297 "fast_io_fail_timeout_sec": 0, 00:06:51.297 "disable_auto_failback": false, 00:06:51.297 "generate_uuids": false, 00:06:51.297 "transport_tos": 0, 00:06:51.297 "nvme_error_stat": false, 00:06:51.297 "rdma_srq_size": 0, 00:06:51.297 "io_path_stat": false, 00:06:51.297 "allow_accel_sequence": false, 00:06:51.297 "rdma_max_cq_size": 0, 00:06:51.297 "rdma_cm_event_timeout_ms": 0, 00:06:51.297 "dhchap_digests": [ 00:06:51.297 "sha256", 00:06:51.297 "sha384", 00:06:51.297 "sha512" 00:06:51.297 ], 00:06:51.297 "dhchap_dhgroups": [ 00:06:51.297 "null", 00:06:51.297 "ffdhe2048", 00:06:51.297 "ffdhe3072", 00:06:51.297 "ffdhe4096", 00:06:51.297 "ffdhe6144", 00:06:51.297 "ffdhe8192" 00:06:51.297 ] 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "bdev_nvme_set_hotplug", 00:06:51.297 "params": { 00:06:51.297 "period_us": 100000, 00:06:51.297 "enable": false 00:06:51.297 } 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "method": "bdev_wait_for_examine" 00:06:51.297 } 00:06:51.297 ] 00:06:51.297 }, 00:06:51.297 { 00:06:51.297 "subsystem": "scsi", 00:06:51.297 "config": null 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "scheduler", 
00:06:51.298 "config": [ 00:06:51.298 { 00:06:51.298 "method": "framework_set_scheduler", 00:06:51.298 "params": { 00:06:51.298 "name": "static" 00:06:51.298 } 00:06:51.298 } 00:06:51.298 ] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "vhost_scsi", 00:06:51.298 "config": [] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "vhost_blk", 00:06:51.298 "config": [] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "ublk", 00:06:51.298 "config": [] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "nbd", 00:06:51.298 "config": [] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "nvmf", 00:06:51.298 "config": [ 00:06:51.298 { 00:06:51.298 "method": "nvmf_set_config", 00:06:51.298 "params": { 00:06:51.298 "discovery_filter": "match_any", 00:06:51.298 "admin_cmd_passthru": { 00:06:51.298 "identify_ctrlr": false 00:06:51.298 }, 00:06:51.298 "dhchap_digests": [ 00:06:51.298 "sha256", 00:06:51.298 "sha384", 00:06:51.298 "sha512" 00:06:51.298 ], 00:06:51.298 "dhchap_dhgroups": [ 00:06:51.298 "null", 00:06:51.298 "ffdhe2048", 00:06:51.298 "ffdhe3072", 00:06:51.298 "ffdhe4096", 00:06:51.298 "ffdhe6144", 00:06:51.298 "ffdhe8192" 00:06:51.298 ] 00:06:51.298 } 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "method": "nvmf_set_max_subsystems", 00:06:51.298 "params": { 00:06:51.298 "max_subsystems": 1024 00:06:51.298 } 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "method": "nvmf_set_crdt", 00:06:51.298 "params": { 00:06:51.298 "crdt1": 0, 00:06:51.298 "crdt2": 0, 00:06:51.298 "crdt3": 0 00:06:51.298 } 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "method": "nvmf_create_transport", 00:06:51.298 "params": { 00:06:51.298 "trtype": "TCP", 00:06:51.298 "max_queue_depth": 128, 00:06:51.298 "max_io_qpairs_per_ctrlr": 127, 00:06:51.298 "in_capsule_data_size": 4096, 00:06:51.298 "max_io_size": 131072, 00:06:51.298 "io_unit_size": 131072, 00:06:51.298 "max_aq_depth": 128, 00:06:51.298 "num_shared_buffers": 511, 00:06:51.298 "buf_cache_size": 4294967295, 
00:06:51.298 "dif_insert_or_strip": false, 00:06:51.298 "zcopy": false, 00:06:51.298 "c2h_success": true, 00:06:51.298 "sock_priority": 0, 00:06:51.298 "abort_timeout_sec": 1, 00:06:51.298 "ack_timeout": 0, 00:06:51.298 "data_wr_pool_size": 0 00:06:51.298 } 00:06:51.298 } 00:06:51.298 ] 00:06:51.298 }, 00:06:51.298 { 00:06:51.298 "subsystem": "iscsi", 00:06:51.298 "config": [ 00:06:51.298 { 00:06:51.298 "method": "iscsi_set_options", 00:06:51.298 "params": { 00:06:51.298 "node_base": "iqn.2016-06.io.spdk", 00:06:51.298 "max_sessions": 128, 00:06:51.298 "max_connections_per_session": 2, 00:06:51.298 "max_queue_depth": 64, 00:06:51.298 "default_time2wait": 2, 00:06:51.298 "default_time2retain": 20, 00:06:51.298 "first_burst_length": 8192, 00:06:51.298 "immediate_data": true, 00:06:51.298 "allow_duplicated_isid": false, 00:06:51.298 "error_recovery_level": 0, 00:06:51.298 "nop_timeout": 60, 00:06:51.298 "nop_in_interval": 30, 00:06:51.298 "disable_chap": false, 00:06:51.298 "require_chap": false, 00:06:51.298 "mutual_chap": false, 00:06:51.298 "chap_group": 0, 00:06:51.298 "max_large_datain_per_connection": 64, 00:06:51.298 "max_r2t_per_connection": 4, 00:06:51.298 "pdu_pool_size": 36864, 00:06:51.298 "immediate_data_pool_size": 16384, 00:06:51.298 "data_out_pool_size": 2048 00:06:51.298 } 00:06:51.298 } 00:06:51.298 ] 00:06:51.298 } 00:06:51.298 ] 00:06:51.298 } 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57005 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57005 ']' 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57005 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57005 00:06:51.298 killing process with pid 57005 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57005' 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57005 00:06:51.298 13:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57005 00:06:53.833 13:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57061 00:06:53.833 13:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:53.833 13:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57061 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57061 ']' 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57061 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57061 00:06:59.101 killing process with pid 57061 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57061' 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57061 00:06:59.101 13:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57061 00:07:01.644 13:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:01.644 13:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:01.644 ************************************ 00:07:01.644 END TEST skip_rpc_with_json 00:07:01.644 ************************************ 00:07:01.644 00:07:01.644 real 0m12.036s 00:07:01.644 user 0m11.393s 00:07:01.644 sys 0m0.980s 00:07:01.644 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.644 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.914 13:41:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:01.914 13:41:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.914 13:41:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.914 13:41:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.914 ************************************ 00:07:01.914 START TEST skip_rpc_with_delay 00:07:01.914 ************************************ 00:07:01.914 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:01.915 13:41:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.915 [2024-10-01 13:41:12.030768] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:01.915 [2024-10-01 13:41:12.030990] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.173 00:07:02.173 real 0m0.219s 00:07:02.173 user 0m0.105s 00:07:02.173 sys 0m0.111s 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.173 13:41:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 ************************************ 00:07:02.173 END TEST skip_rpc_with_delay 00:07:02.173 ************************************ 00:07:02.173 13:41:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:02.173 13:41:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:02.173 13:41:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:02.173 13:41:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.173 13:41:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.173 13:41:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 ************************************ 00:07:02.173 START TEST exit_on_failed_rpc_init 00:07:02.173 ************************************ 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57200 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:07:02.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57200 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57200 ']' 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.173 13:41:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 [2024-10-01 13:41:12.325993] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:02.173 [2024-10-01 13:41:12.326164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57200 ] 00:07:02.431 [2024-10-01 13:41:12.507247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.690 [2024-10-01 13:41:12.725998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.625 13:41:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:03.625 13:41:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.625 [2024-10-01 13:41:13.768661] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:03.625 [2024-10-01 13:41:13.769006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57224 ] 00:07:03.884 [2024-10-01 13:41:13.945359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.143 [2024-10-01 13:41:14.177589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.143 [2024-10-01 13:41:14.177717] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:04.143 [2024-10-01 13:41:14.177740] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.143 [2024-10-01 13:41:14.177761] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57200 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57200 ']' 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57200 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57200 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57200' 
00:07:04.711 killing process with pid 57200 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57200 00:07:04.711 13:41:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57200 00:07:07.280 ************************************ 00:07:07.280 END TEST exit_on_failed_rpc_init 00:07:07.280 ************************************ 00:07:07.280 00:07:07.280 real 0m5.115s 00:07:07.280 user 0m5.686s 00:07:07.280 sys 0m0.710s 00:07:07.280 13:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.280 13:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:07.280 13:41:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:07.280 ************************************ 00:07:07.280 END TEST skip_rpc 00:07:07.280 ************************************ 00:07:07.280 00:07:07.280 real 0m25.486s 00:07:07.280 user 0m24.524s 00:07:07.280 sys 0m2.496s 00:07:07.280 13:41:17 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.280 13:41:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.280 13:41:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:07.280 13:41:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.280 13:41:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.280 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:07.280 ************************************ 00:07:07.280 START TEST rpc_client 00:07:07.280 ************************************ 00:07:07.280 13:41:17 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:07.540 * Looking for test storage... 
00:07:07.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.540 13:41:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.540 --rc genhtml_branch_coverage=1 00:07:07.540 --rc genhtml_function_coverage=1 00:07:07.540 --rc genhtml_legend=1 00:07:07.540 --rc geninfo_all_blocks=1 00:07:07.540 --rc geninfo_unexecuted_blocks=1 00:07:07.540 00:07:07.540 ' 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.540 --rc genhtml_branch_coverage=1 00:07:07.540 --rc genhtml_function_coverage=1 00:07:07.540 --rc genhtml_legend=1 00:07:07.540 --rc geninfo_all_blocks=1 00:07:07.540 --rc geninfo_unexecuted_blocks=1 00:07:07.540 00:07:07.540 ' 00:07:07.540 13:41:17 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.540 --rc genhtml_branch_coverage=1 00:07:07.540 --rc genhtml_function_coverage=1 00:07:07.540 --rc genhtml_legend=1 00:07:07.540 --rc geninfo_all_blocks=1 00:07:07.540 --rc geninfo_unexecuted_blocks=1 00:07:07.540 00:07:07.540 ' 00:07:07.540 13:41:17 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.540 --rc genhtml_branch_coverage=1 00:07:07.540 --rc genhtml_function_coverage=1 00:07:07.540 --rc genhtml_legend=1 00:07:07.540 --rc geninfo_all_blocks=1 00:07:07.540 --rc geninfo_unexecuted_blocks=1 00:07:07.540 00:07:07.540 ' 00:07:07.540 13:41:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:07.799 OK 00:07:07.799 13:41:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:07.799 00:07:07.799 real 0m0.331s 00:07:07.799 user 0m0.172s 00:07:07.799 sys 0m0.176s 00:07:07.799 13:41:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.799 13:41:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:07.799 ************************************ 00:07:07.799 END TEST rpc_client 00:07:07.799 ************************************ 00:07:07.799 13:41:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:07.799 13:41:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.799 13:41:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.799 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:07.799 ************************************ 00:07:07.799 START TEST json_config 00:07:07.799 ************************************ 00:07:07.799 13:41:17 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:07.799 13:41:17 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.799 13:41:17 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.799 13:41:17 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.058 13:41:18 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.058 13:41:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.058 13:41:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.058 13:41:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.058 13:41:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.058 13:41:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.058 13:41:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:08.058 13:41:18 json_config -- scripts/common.sh@345 -- # : 1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.058 13:41:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.058 13:41:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@353 -- # local d=1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.058 13:41:18 json_config -- scripts/common.sh@355 -- # echo 1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.058 13:41:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@353 -- # local d=2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.058 13:41:18 json_config -- scripts/common.sh@355 -- # echo 2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.058 13:41:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.058 13:41:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.058 13:41:18 json_config -- scripts/common.sh@368 -- # return 0 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.059 --rc genhtml_branch_coverage=1 00:07:08.059 --rc genhtml_function_coverage=1 00:07:08.059 --rc genhtml_legend=1 00:07:08.059 --rc geninfo_all_blocks=1 00:07:08.059 --rc geninfo_unexecuted_blocks=1 00:07:08.059 00:07:08.059 ' 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.059 --rc genhtml_branch_coverage=1 00:07:08.059 --rc genhtml_function_coverage=1 00:07:08.059 --rc genhtml_legend=1 00:07:08.059 --rc geninfo_all_blocks=1 00:07:08.059 --rc geninfo_unexecuted_blocks=1 00:07:08.059 00:07:08.059 ' 00:07:08.059 13:41:18 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.059 --rc genhtml_branch_coverage=1 00:07:08.059 --rc genhtml_function_coverage=1 00:07:08.059 --rc genhtml_legend=1 00:07:08.059 --rc geninfo_all_blocks=1 00:07:08.059 --rc geninfo_unexecuted_blocks=1 00:07:08.059 00:07:08.059 ' 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.059 --rc genhtml_branch_coverage=1 00:07:08.059 --rc genhtml_function_coverage=1 00:07:08.059 --rc genhtml_legend=1 00:07:08.059 --rc geninfo_all_blocks=1 00:07:08.059 --rc geninfo_unexecuted_blocks=1 00:07:08.059 00:07:08.059 ' 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70e69b5c-9e77-4517-915c-f036209d1fdb 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=70e69b5c-9e77-4517-915c-f036209d1fdb 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.059 13:41:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.059 13:41:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.059 13:41:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.059 13:41:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.059 13:41:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.059 13:41:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.059 13:41:18 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.059 13:41:18 json_config -- paths/export.sh@5 -- # export PATH 00:07:08.059 13:41:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@51 -- # : 0 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.059 13:41:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
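The `LCOV_OPTS`/`LCOV` exports in this trace are chosen after `scripts/common.sh` compares the installed lcov version against 2, field by field (the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` calls traced here). A minimal standalone sketch of that dotted-version comparison, under an assumed helper name `ver_lt` (the real logic lives in `scripts/common.sh` and splits on `.-:`), is:

```shell
# Hypothetical sketch of the dotted-version "less than" check traced above.
# Components are compared numerically left to right; missing components
# count as 0; an equal version is not "less than".
ver_lt() {
    local -a a b
    IFS='.-' read -ra a <<< "$1"
    IFS='.-' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad shorter version with zeros
        (( x < y )) && return 0           # strictly older
        (( x > y )) && return 1           # strictly newer
    done
    return 1                              # equal
}
```

In the trace, `lt 1.15 2` succeeds (lcov 1.15 is older than 2), so the harness sets `lcov_rc_opt` and the `--rc lcov_branch_coverage=1 ...` option set seen in the exports.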
00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:08.059 WARNING: No tests are enabled so not running JSON configuration tests 00:07:08.059 13:41:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:08.059 00:07:08.059 real 0m0.241s 00:07:08.059 user 0m0.149s 00:07:08.059 sys 0m0.090s 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.059 13:41:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.059 ************************************ 00:07:08.059 END TEST json_config 00:07:08.059 ************************************ 00:07:08.059 13:41:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:08.059 13:41:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.059 13:41:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.059 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:08.059 ************************************ 00:07:08.059 START TEST json_config_extra_key 00:07:08.059 ************************************ 00:07:08.059 13:41:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.319 13:41:18 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.319 13:41:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.319 --rc genhtml_branch_coverage=1 00:07:08.319 --rc genhtml_function_coverage=1 00:07:08.319 --rc genhtml_legend=1 00:07:08.319 --rc geninfo_all_blocks=1 00:07:08.319 --rc geninfo_unexecuted_blocks=1 00:07:08.319 00:07:08.319 ' 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.319 --rc genhtml_branch_coverage=1 00:07:08.319 --rc genhtml_function_coverage=1 00:07:08.319 --rc 
genhtml_legend=1 00:07:08.319 --rc geninfo_all_blocks=1 00:07:08.319 --rc geninfo_unexecuted_blocks=1 00:07:08.319 00:07:08.319 ' 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.319 --rc genhtml_branch_coverage=1 00:07:08.319 --rc genhtml_function_coverage=1 00:07:08.319 --rc genhtml_legend=1 00:07:08.319 --rc geninfo_all_blocks=1 00:07:08.319 --rc geninfo_unexecuted_blocks=1 00:07:08.319 00:07:08.319 ' 00:07:08.319 13:41:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.319 --rc genhtml_branch_coverage=1 00:07:08.319 --rc genhtml_function_coverage=1 00:07:08.319 --rc genhtml_legend=1 00:07:08.319 --rc geninfo_all_blocks=1 00:07:08.319 --rc geninfo_unexecuted_blocks=1 00:07:08.319 00:07:08.319 ' 00:07:08.319 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.319 13:41:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:08.319 13:41:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.319 13:41:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.319 13:41:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70e69b5c-9e77-4517-915c-f036209d1fdb 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=70e69b5c-9e77-4517-915c-f036209d1fdb 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.320 13:41:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.320 13:41:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.320 13:41:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.320 13:41:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.320 13:41:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.320 13:41:18 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.320 13:41:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.320 13:41:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:08.320 13:41:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.320 13:41:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:08.320 INFO: launching applications... 
00:07:08.320 13:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57439 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.320 Waiting for target to run... 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57439 /var/tmp/spdk_tgt.sock 00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57439 ']' 00:07:08.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.320 13:41:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.320 13:41:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:08.579 [2024-10-01 13:41:18.518067] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:08.580 [2024-10-01 13:41:18.518204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57439 ] 00:07:08.838 [2024-10-01 13:41:18.916510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.096 [2024-10-01 13:41:19.114121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.664 00:07:09.664 INFO: shutting down applications... 00:07:09.664 13:41:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.664 13:41:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:09.664 13:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
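The shutdown sequence traced here (`json_config/common.sh@38` through `@45`) sends SIGINT to the target pid, then polls `kill -0 $pid` up to 30 times with a 0.5 s sleep between checks. A minimal sketch of that pattern, under an assumed helper name `wait_for_exit` (the signal is a parameter here only to make the sketch easy to exercise), is:

```shell
# Sketch of the poll loop in json_config/common.sh: signal the process,
# then use `kill -0` (liveness probe, sends no signal) until it is gone
# or ~15 s (30 * 0.5 s) have elapsed.
wait_for_exit() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    return 1                                     # still alive after ~15 s
}
```

In the trace the sixth `kill -0 57439` probe fails, the harness clears `app_pid["$app"]`, breaks out of the loop, and prints "SPDK target shutdown done".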
00:07:09.664 13:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57439 ]] 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57439 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:09.664 13:41:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.230 13:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.230 13:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.230 13:41:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:10.230 13:41:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.794 13:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.794 13:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.794 13:41:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:10.794 13:41:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.363 13:41:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:11.363 13:41:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.363 13:41:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:11.363 13:41:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.933 13:41:21 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:11.933 13:41:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.933 13:41:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:11.933 13:41:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.191 13:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.191 13:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.192 13:41:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:12.192 13:41:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57439 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.759 SPDK target shutdown done 00:07:12.759 13:41:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.759 Success 00:07:12.759 13:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:12.759 ************************************ 00:07:12.759 END TEST json_config_extra_key 00:07:12.759 ************************************ 00:07:12.759 00:07:12.759 real 0m4.706s 00:07:12.759 user 0m4.290s 00:07:12.759 sys 0m0.611s 00:07:12.759 13:41:22 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.759 13:41:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:12.759 13:41:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:12.759 13:41:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.759 13:41:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.759 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:07:13.019 ************************************ 00:07:13.019 START TEST alias_rpc 00:07:13.019 ************************************ 00:07:13.019 13:41:22 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:13.019 * Looking for test storage... 00:07:13.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:13.019 13:41:23 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.019 13:41:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.019 --rc genhtml_branch_coverage=1 00:07:13.019 --rc genhtml_function_coverage=1 00:07:13.019 --rc genhtml_legend=1 00:07:13.019 --rc geninfo_all_blocks=1 00:07:13.019 --rc geninfo_unexecuted_blocks=1 00:07:13.019 00:07:13.019 ' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.019 --rc genhtml_branch_coverage=1 00:07:13.019 --rc genhtml_function_coverage=1 00:07:13.019 --rc 
genhtml_legend=1 00:07:13.019 --rc geninfo_all_blocks=1 00:07:13.019 --rc geninfo_unexecuted_blocks=1 00:07:13.019 00:07:13.019 ' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.019 --rc genhtml_branch_coverage=1 00:07:13.019 --rc genhtml_function_coverage=1 00:07:13.019 --rc genhtml_legend=1 00:07:13.019 --rc geninfo_all_blocks=1 00:07:13.019 --rc geninfo_unexecuted_blocks=1 00:07:13.019 00:07:13.019 ' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.019 --rc genhtml_branch_coverage=1 00:07:13.019 --rc genhtml_function_coverage=1 00:07:13.019 --rc genhtml_legend=1 00:07:13.019 --rc geninfo_all_blocks=1 00:07:13.019 --rc geninfo_unexecuted_blocks=1 00:07:13.019 00:07:13.019 ' 00:07:13.019 13:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.019 13:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:13.019 13:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57555 00:07:13.019 13:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57555 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57555 ']' 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
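The `waitforlisten 57555` call here blocks until the freshly launched `spdk_tgt` is up and listening on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`, with `max_retries=100` as the trace shows). The real helper also retries an RPC call against the socket; this simplified, assumed-name sketch only polls for the socket path:

```shell
# Simplified sketch of the "waiting for process to listen" phase:
# poll until the UNIX-domain socket path exists, up to a retry budget.
wait_for_unix_socket() {
    local sock=$1 tries=${2:-100}
    while (( tries-- > 0 )); do
        [[ -S $sock ]] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                         # retry budget exhausted
}
```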
00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.019 13:41:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.301 [2024-10-01 13:41:23.286163] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:13.301 [2024-10-01 13:41:23.286503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57555 ] 00:07:13.301 [2024-10-01 13:41:23.465991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.562 [2024-10-01 13:41:23.686177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.546 13:41:24 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.546 13:41:24 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:14.546 13:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:14.809 13:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57555 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57555 ']' 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57555 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57555 00:07:14.809 killing process with pid 57555 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57555' 00:07:14.809 13:41:24 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57555 00:07:14.809 13:41:24 alias_rpc -- common/autotest_common.sh@974 -- # wait 57555 00:07:17.388 ************************************ 00:07:17.388 END TEST alias_rpc 00:07:17.388 ************************************ 00:07:17.388 00:07:17.388 real 0m4.498s 00:07:17.388 user 0m4.462s 00:07:17.388 sys 0m0.619s 00:07:17.388 13:41:27 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.389 13:41:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.389 13:41:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:17.389 13:41:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:17.389 13:41:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.389 13:41:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.389 13:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.389 ************************************ 00:07:17.389 START TEST spdkcli_tcp 00:07:17.389 ************************************ 00:07:17.389 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:17.647 * Looking for test storage... 
00:07:17.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:17.647 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:17.647 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:17.647 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:17.647 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.647 13:41:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.648 13:41:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.648 --rc genhtml_branch_coverage=1 00:07:17.648 --rc genhtml_function_coverage=1 00:07:17.648 --rc genhtml_legend=1 00:07:17.648 --rc geninfo_all_blocks=1 00:07:17.648 --rc geninfo_unexecuted_blocks=1 00:07:17.648 00:07:17.648 ' 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.648 --rc genhtml_branch_coverage=1 00:07:17.648 --rc genhtml_function_coverage=1 00:07:17.648 --rc genhtml_legend=1 00:07:17.648 --rc geninfo_all_blocks=1 00:07:17.648 --rc geninfo_unexecuted_blocks=1 00:07:17.648 00:07:17.648 ' 00:07:17.648 13:41:27 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.648 --rc genhtml_branch_coverage=1 00:07:17.648 --rc genhtml_function_coverage=1 00:07:17.648 --rc genhtml_legend=1 00:07:17.648 --rc geninfo_all_blocks=1 00:07:17.648 --rc geninfo_unexecuted_blocks=1 00:07:17.648 00:07:17.648 ' 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.648 --rc genhtml_branch_coverage=1 00:07:17.648 --rc genhtml_function_coverage=1 00:07:17.648 --rc genhtml_legend=1 00:07:17.648 --rc geninfo_all_blocks=1 00:07:17.648 --rc geninfo_unexecuted_blocks=1 00:07:17.648 00:07:17.648 ' 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57663 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57663 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57663 ']' 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.648 13:41:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.648 13:41:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:17.648 [2024-10-01 13:41:27.813765] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:17.648 [2024-10-01 13:41:27.813895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57663 ] 00:07:17.909 [2024-10-01 13:41:27.987672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.168 [2024-10-01 13:41:28.208737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.168 [2024-10-01 13:41:28.208773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.103 13:41:29 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.103 13:41:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:19.104 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57686 00:07:19.104 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:19.104 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:19.363 [ 00:07:19.363 "bdev_malloc_delete", 00:07:19.363 "bdev_malloc_create", 00:07:19.363 "bdev_null_resize", 00:07:19.363 "bdev_null_delete", 00:07:19.363 "bdev_null_create", 00:07:19.363 "bdev_nvme_cuse_unregister", 00:07:19.363 "bdev_nvme_cuse_register", 00:07:19.363 "bdev_opal_new_user", 00:07:19.363 "bdev_opal_set_lock_state", 00:07:19.363 "bdev_opal_delete", 00:07:19.363 "bdev_opal_get_info", 00:07:19.363 "bdev_opal_create", 00:07:19.363 "bdev_nvme_opal_revert", 00:07:19.363 "bdev_nvme_opal_init", 00:07:19.363 "bdev_nvme_send_cmd", 00:07:19.363 "bdev_nvme_set_keys", 00:07:19.363 "bdev_nvme_get_path_iostat", 00:07:19.363 "bdev_nvme_get_mdns_discovery_info", 00:07:19.363 "bdev_nvme_stop_mdns_discovery", 00:07:19.363 "bdev_nvme_start_mdns_discovery", 00:07:19.363 "bdev_nvme_set_multipath_policy", 00:07:19.363 
"bdev_nvme_set_preferred_path", 00:07:19.363 "bdev_nvme_get_io_paths", 00:07:19.363 "bdev_nvme_remove_error_injection", 00:07:19.363 "bdev_nvme_add_error_injection", 00:07:19.363 "bdev_nvme_get_discovery_info", 00:07:19.363 "bdev_nvme_stop_discovery", 00:07:19.363 "bdev_nvme_start_discovery", 00:07:19.363 "bdev_nvme_get_controller_health_info", 00:07:19.363 "bdev_nvme_disable_controller", 00:07:19.363 "bdev_nvme_enable_controller", 00:07:19.363 "bdev_nvme_reset_controller", 00:07:19.363 "bdev_nvme_get_transport_statistics", 00:07:19.363 "bdev_nvme_apply_firmware", 00:07:19.363 "bdev_nvme_detach_controller", 00:07:19.363 "bdev_nvme_get_controllers", 00:07:19.363 "bdev_nvme_attach_controller", 00:07:19.363 "bdev_nvme_set_hotplug", 00:07:19.363 "bdev_nvme_set_options", 00:07:19.363 "bdev_passthru_delete", 00:07:19.363 "bdev_passthru_create", 00:07:19.363 "bdev_lvol_set_parent_bdev", 00:07:19.363 "bdev_lvol_set_parent", 00:07:19.363 "bdev_lvol_check_shallow_copy", 00:07:19.363 "bdev_lvol_start_shallow_copy", 00:07:19.363 "bdev_lvol_grow_lvstore", 00:07:19.363 "bdev_lvol_get_lvols", 00:07:19.363 "bdev_lvol_get_lvstores", 00:07:19.363 "bdev_lvol_delete", 00:07:19.363 "bdev_lvol_set_read_only", 00:07:19.363 "bdev_lvol_resize", 00:07:19.363 "bdev_lvol_decouple_parent", 00:07:19.363 "bdev_lvol_inflate", 00:07:19.363 "bdev_lvol_rename", 00:07:19.363 "bdev_lvol_clone_bdev", 00:07:19.363 "bdev_lvol_clone", 00:07:19.363 "bdev_lvol_snapshot", 00:07:19.363 "bdev_lvol_create", 00:07:19.363 "bdev_lvol_delete_lvstore", 00:07:19.363 "bdev_lvol_rename_lvstore", 00:07:19.363 "bdev_lvol_create_lvstore", 00:07:19.363 "bdev_raid_set_options", 00:07:19.363 "bdev_raid_remove_base_bdev", 00:07:19.363 "bdev_raid_add_base_bdev", 00:07:19.363 "bdev_raid_delete", 00:07:19.363 "bdev_raid_create", 00:07:19.363 "bdev_raid_get_bdevs", 00:07:19.363 "bdev_error_inject_error", 00:07:19.363 "bdev_error_delete", 00:07:19.363 "bdev_error_create", 00:07:19.363 "bdev_split_delete", 00:07:19.363 
"bdev_split_create", 00:07:19.363 "bdev_delay_delete", 00:07:19.363 "bdev_delay_create", 00:07:19.363 "bdev_delay_update_latency", 00:07:19.363 "bdev_zone_block_delete", 00:07:19.363 "bdev_zone_block_create", 00:07:19.363 "blobfs_create", 00:07:19.363 "blobfs_detect", 00:07:19.363 "blobfs_set_cache_size", 00:07:19.363 "bdev_aio_delete", 00:07:19.363 "bdev_aio_rescan", 00:07:19.363 "bdev_aio_create", 00:07:19.363 "bdev_ftl_set_property", 00:07:19.363 "bdev_ftl_get_properties", 00:07:19.363 "bdev_ftl_get_stats", 00:07:19.363 "bdev_ftl_unmap", 00:07:19.363 "bdev_ftl_unload", 00:07:19.363 "bdev_ftl_delete", 00:07:19.363 "bdev_ftl_load", 00:07:19.363 "bdev_ftl_create", 00:07:19.363 "bdev_virtio_attach_controller", 00:07:19.363 "bdev_virtio_scsi_get_devices", 00:07:19.363 "bdev_virtio_detach_controller", 00:07:19.363 "bdev_virtio_blk_set_hotplug", 00:07:19.363 "bdev_iscsi_delete", 00:07:19.363 "bdev_iscsi_create", 00:07:19.363 "bdev_iscsi_set_options", 00:07:19.363 "accel_error_inject_error", 00:07:19.363 "ioat_scan_accel_module", 00:07:19.363 "dsa_scan_accel_module", 00:07:19.363 "iaa_scan_accel_module", 00:07:19.363 "keyring_file_remove_key", 00:07:19.363 "keyring_file_add_key", 00:07:19.363 "keyring_linux_set_options", 00:07:19.363 "fsdev_aio_delete", 00:07:19.363 "fsdev_aio_create", 00:07:19.363 "iscsi_get_histogram", 00:07:19.363 "iscsi_enable_histogram", 00:07:19.363 "iscsi_set_options", 00:07:19.363 "iscsi_get_auth_groups", 00:07:19.363 "iscsi_auth_group_remove_secret", 00:07:19.363 "iscsi_auth_group_add_secret", 00:07:19.363 "iscsi_delete_auth_group", 00:07:19.363 "iscsi_create_auth_group", 00:07:19.363 "iscsi_set_discovery_auth", 00:07:19.363 "iscsi_get_options", 00:07:19.363 "iscsi_target_node_request_logout", 00:07:19.363 "iscsi_target_node_set_redirect", 00:07:19.363 "iscsi_target_node_set_auth", 00:07:19.363 "iscsi_target_node_add_lun", 00:07:19.363 "iscsi_get_stats", 00:07:19.363 "iscsi_get_connections", 00:07:19.363 "iscsi_portal_group_set_auth", 
00:07:19.363 "iscsi_start_portal_group", 00:07:19.363 "iscsi_delete_portal_group", 00:07:19.363 "iscsi_create_portal_group", 00:07:19.363 "iscsi_get_portal_groups", 00:07:19.363 "iscsi_delete_target_node", 00:07:19.363 "iscsi_target_node_remove_pg_ig_maps", 00:07:19.363 "iscsi_target_node_add_pg_ig_maps", 00:07:19.363 "iscsi_create_target_node", 00:07:19.363 "iscsi_get_target_nodes", 00:07:19.363 "iscsi_delete_initiator_group", 00:07:19.363 "iscsi_initiator_group_remove_initiators", 00:07:19.363 "iscsi_initiator_group_add_initiators", 00:07:19.363 "iscsi_create_initiator_group", 00:07:19.363 "iscsi_get_initiator_groups", 00:07:19.363 "nvmf_set_crdt", 00:07:19.363 "nvmf_set_config", 00:07:19.363 "nvmf_set_max_subsystems", 00:07:19.363 "nvmf_stop_mdns_prr", 00:07:19.363 "nvmf_publish_mdns_prr", 00:07:19.363 "nvmf_subsystem_get_listeners", 00:07:19.363 "nvmf_subsystem_get_qpairs", 00:07:19.363 "nvmf_subsystem_get_controllers", 00:07:19.363 "nvmf_get_stats", 00:07:19.363 "nvmf_get_transports", 00:07:19.363 "nvmf_create_transport", 00:07:19.363 "nvmf_get_targets", 00:07:19.363 "nvmf_delete_target", 00:07:19.363 "nvmf_create_target", 00:07:19.363 "nvmf_subsystem_allow_any_host", 00:07:19.363 "nvmf_subsystem_set_keys", 00:07:19.363 "nvmf_subsystem_remove_host", 00:07:19.363 "nvmf_subsystem_add_host", 00:07:19.363 "nvmf_ns_remove_host", 00:07:19.363 "nvmf_ns_add_host", 00:07:19.363 "nvmf_subsystem_remove_ns", 00:07:19.363 "nvmf_subsystem_set_ns_ana_group", 00:07:19.363 "nvmf_subsystem_add_ns", 00:07:19.363 "nvmf_subsystem_listener_set_ana_state", 00:07:19.363 "nvmf_discovery_get_referrals", 00:07:19.363 "nvmf_discovery_remove_referral", 00:07:19.363 "nvmf_discovery_add_referral", 00:07:19.363 "nvmf_subsystem_remove_listener", 00:07:19.363 "nvmf_subsystem_add_listener", 00:07:19.363 "nvmf_delete_subsystem", 00:07:19.363 "nvmf_create_subsystem", 00:07:19.363 "nvmf_get_subsystems", 00:07:19.363 "env_dpdk_get_mem_stats", 00:07:19.363 "nbd_get_disks", 00:07:19.363 
"nbd_stop_disk", 00:07:19.363 "nbd_start_disk", 00:07:19.363 "ublk_recover_disk", 00:07:19.363 "ublk_get_disks", 00:07:19.363 "ublk_stop_disk", 00:07:19.363 "ublk_start_disk", 00:07:19.363 "ublk_destroy_target", 00:07:19.363 "ublk_create_target", 00:07:19.363 "virtio_blk_create_transport", 00:07:19.363 "virtio_blk_get_transports", 00:07:19.363 "vhost_controller_set_coalescing", 00:07:19.363 "vhost_get_controllers", 00:07:19.363 "vhost_delete_controller", 00:07:19.363 "vhost_create_blk_controller", 00:07:19.363 "vhost_scsi_controller_remove_target", 00:07:19.363 "vhost_scsi_controller_add_target", 00:07:19.363 "vhost_start_scsi_controller", 00:07:19.363 "vhost_create_scsi_controller", 00:07:19.363 "thread_set_cpumask", 00:07:19.363 "scheduler_set_options", 00:07:19.363 "framework_get_governor", 00:07:19.363 "framework_get_scheduler", 00:07:19.363 "framework_set_scheduler", 00:07:19.363 "framework_get_reactors", 00:07:19.363 "thread_get_io_channels", 00:07:19.363 "thread_get_pollers", 00:07:19.363 "thread_get_stats", 00:07:19.363 "framework_monitor_context_switch", 00:07:19.363 "spdk_kill_instance", 00:07:19.363 "log_enable_timestamps", 00:07:19.363 "log_get_flags", 00:07:19.363 "log_clear_flag", 00:07:19.363 "log_set_flag", 00:07:19.363 "log_get_level", 00:07:19.363 "log_set_level", 00:07:19.363 "log_get_print_level", 00:07:19.363 "log_set_print_level", 00:07:19.363 "framework_enable_cpumask_locks", 00:07:19.363 "framework_disable_cpumask_locks", 00:07:19.363 "framework_wait_init", 00:07:19.363 "framework_start_init", 00:07:19.363 "scsi_get_devices", 00:07:19.363 "bdev_get_histogram", 00:07:19.363 "bdev_enable_histogram", 00:07:19.363 "bdev_set_qos_limit", 00:07:19.363 "bdev_set_qd_sampling_period", 00:07:19.363 "bdev_get_bdevs", 00:07:19.363 "bdev_reset_iostat", 00:07:19.363 "bdev_get_iostat", 00:07:19.363 "bdev_examine", 00:07:19.363 "bdev_wait_for_examine", 00:07:19.363 "bdev_set_options", 00:07:19.363 "accel_get_stats", 00:07:19.363 "accel_set_options", 
00:07:19.363 "accel_set_driver", 00:07:19.363 "accel_crypto_key_destroy", 00:07:19.363 "accel_crypto_keys_get", 00:07:19.363 "accel_crypto_key_create", 00:07:19.363 "accel_assign_opc", 00:07:19.363 "accel_get_module_info", 00:07:19.363 "accel_get_opc_assignments", 00:07:19.363 "vmd_rescan", 00:07:19.363 "vmd_remove_device", 00:07:19.363 "vmd_enable", 00:07:19.364 "sock_get_default_impl", 00:07:19.364 "sock_set_default_impl", 00:07:19.364 "sock_impl_set_options", 00:07:19.364 "sock_impl_get_options", 00:07:19.364 "iobuf_get_stats", 00:07:19.364 "iobuf_set_options", 00:07:19.364 "keyring_get_keys", 00:07:19.364 "framework_get_pci_devices", 00:07:19.364 "framework_get_config", 00:07:19.364 "framework_get_subsystems", 00:07:19.364 "fsdev_set_opts", 00:07:19.364 "fsdev_get_opts", 00:07:19.364 "trace_get_info", 00:07:19.364 "trace_get_tpoint_group_mask", 00:07:19.364 "trace_disable_tpoint_group", 00:07:19.364 "trace_enable_tpoint_group", 00:07:19.364 "trace_clear_tpoint_mask", 00:07:19.364 "trace_set_tpoint_mask", 00:07:19.364 "notify_get_notifications", 00:07:19.364 "notify_get_types", 00:07:19.364 "spdk_get_version", 00:07:19.364 "rpc_get_methods" 00:07:19.364 ] 00:07:19.364 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.364 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:19.364 13:41:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57663 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57663 ']' 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57663 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.364 13:41:29 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57663 00:07:19.364 killing process with pid 57663 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57663' 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57663 00:07:19.364 13:41:29 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57663 00:07:22.697 ************************************ 00:07:22.697 END TEST spdkcli_tcp 00:07:22.697 ************************************ 00:07:22.697 00:07:22.697 real 0m4.599s 00:07:22.697 user 0m8.117s 00:07:22.697 sys 0m0.655s 00:07:22.697 13:41:32 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.697 13:41:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.697 13:41:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:22.697 13:41:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.697 13:41:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.697 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:07:22.697 ************************************ 00:07:22.697 START TEST dpdk_mem_utility 00:07:22.697 ************************************ 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:22.697 * Looking for test storage... 
00:07:22.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.697 13:41:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.697 --rc genhtml_branch_coverage=1 00:07:22.697 --rc genhtml_function_coverage=1 00:07:22.697 --rc genhtml_legend=1 00:07:22.697 --rc geninfo_all_blocks=1 00:07:22.697 --rc geninfo_unexecuted_blocks=1 00:07:22.697 00:07:22.697 ' 00:07:22.697 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.697 --rc genhtml_branch_coverage=1 00:07:22.697 --rc genhtml_function_coverage=1 00:07:22.697 --rc genhtml_legend=1 00:07:22.697 --rc geninfo_all_blocks=1 00:07:22.698 --rc 
geninfo_unexecuted_blocks=1 00:07:22.698 00:07:22.698 ' 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.698 --rc genhtml_branch_coverage=1 00:07:22.698 --rc genhtml_function_coverage=1 00:07:22.698 --rc genhtml_legend=1 00:07:22.698 --rc geninfo_all_blocks=1 00:07:22.698 --rc geninfo_unexecuted_blocks=1 00:07:22.698 00:07:22.698 ' 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.698 --rc genhtml_branch_coverage=1 00:07:22.698 --rc genhtml_function_coverage=1 00:07:22.698 --rc genhtml_legend=1 00:07:22.698 --rc geninfo_all_blocks=1 00:07:22.698 --rc geninfo_unexecuted_blocks=1 00:07:22.698 00:07:22.698 ' 00:07:22.698 13:41:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:22.698 13:41:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.698 13:41:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57791 00:07:22.698 13:41:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57791 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57791 ']' 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.698 13:41:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:22.698 [2024-10-01 13:41:32.524932] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:22.698 [2024-10-01 13:41:32.525082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57791 ] 00:07:22.698 [2024-10-01 13:41:32.728230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.968 [2024-10-01 13:41:32.951772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.909 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.909 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:23.909 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:23.909 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:23.909 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.909 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:23.909 { 00:07:23.909 "filename": "/tmp/spdk_mem_dump.txt" 00:07:23.909 } 00:07:23.909 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.909 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:23.909 DPDK memory size 866.000000 MiB in 1 heap(s) 00:07:23.909 1 heaps totaling size 866.000000 MiB 00:07:23.909 size: 866.000000 MiB heap id: 0 00:07:23.909 end heaps---------- 00:07:23.909 9 mempools totaling size 642.649841 MiB 00:07:23.909 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:23.909 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:23.909 size: 92.545471 MiB name: bdev_io_57791 00:07:23.909 size: 51.011292 MiB name: evtpool_57791 00:07:23.909 size: 50.003479 MiB name: msgpool_57791 00:07:23.909 size: 36.509338 MiB name: fsdev_io_57791 00:07:23.909 size: 21.763794 MiB name: PDU_Pool 00:07:23.909 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:23.909 size: 0.026123 MiB name: Session_Pool 00:07:23.909 end mempools------- 00:07:23.909 6 memzones totaling size 4.142822 MiB 00:07:23.909 size: 1.000366 MiB name: RG_ring_0_57791 00:07:23.909 size: 1.000366 MiB name: RG_ring_1_57791 00:07:23.909 size: 1.000366 MiB name: RG_ring_4_57791 00:07:23.909 size: 1.000366 MiB name: RG_ring_5_57791 00:07:23.909 size: 0.125366 MiB name: RG_ring_2_57791 00:07:23.909 size: 0.015991 MiB name: RG_ring_3_57791 00:07:23.909 end memzones------- 00:07:23.909 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:23.909 heap id: 0 total size: 866.000000 MiB number of busy elements: 311 number of free elements: 19 00:07:23.909 list of free elements. 
size: 19.914551 MiB 00:07:23.909 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:23.909 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:23.909 element at address: 0x200009600000 with size: 1.995972 MiB 00:07:23.909 element at address: 0x20000d800000 with size: 1.995972 MiB 00:07:23.909 element at address: 0x200007000000 with size: 1.991028 MiB 00:07:23.909 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:07:23.909 element at address: 0x20001c300040 with size: 0.999939 MiB 00:07:23.909 element at address: 0x20001c400000 with size: 0.999084 MiB 00:07:23.909 element at address: 0x200035000000 with size: 0.994324 MiB 00:07:23.909 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:07:23.909 element at address: 0x20001c700040 with size: 0.936401 MiB 00:07:23.909 element at address: 0x200000200000 with size: 0.831909 MiB 00:07:23.909 element at address: 0x20001de00000 with size: 0.562195 MiB 00:07:23.909 element at address: 0x200003e00000 with size: 0.490417 MiB 00:07:23.909 element at address: 0x20001c000000 with size: 0.489197 MiB 00:07:23.909 element at address: 0x20001c800000 with size: 0.485413 MiB 00:07:23.909 element at address: 0x200015e00000 with size: 0.443481 MiB 00:07:23.909 element at address: 0x20002b200000 with size: 0.390442 MiB 00:07:23.909 element at address: 0x200003a00000 with size: 0.352844 MiB 00:07:23.909 list of standard malloc elements. 
size: 199.286743 MiB
00:07:23.909 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:07:23.909 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:07:23.909 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:07:23.909 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:07:23.909 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:07:23.909 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:07:23.910 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:07:23.910 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:07:23.910 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:07:23.910 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:07:23.910 element at address: 0x200015dff040 with size: 0.000305 MiB
00:07:23.910 [several hundred further elements with size: 0.000244 MiB, at addresses 0x2000002d4f80 through 0x20002b26fe80]
00:07:23.911 list of memzone associated elements. size: 646.798706 MiB
00:07:23.911 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:07:23.911 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:23.911 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:07:23.912 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:23.912 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:07:23.912 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57791_0
00:07:23.912 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:07:23.912 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57791_0
00:07:23.912 element at address: 0x200003fff340 with size: 48.003113 MiB
00:07:23.912 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57791_0
00:07:23.912 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:07:23.912 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57791_0
00:07:23.912 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:07:23.912 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:23.912 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:07:23.912 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:23.912 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:07:23.912 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57791
00:07:23.912 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:07:23.912 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57791
00:07:23.912 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:07:23.912 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57791
00:07:23.912 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:07:23.912 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:23.912 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:07:23.912 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:23.912 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:07:23.912 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:23.912 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:07:23.912 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:23.912 element at address: 0x200003eff100 with size: 1.000549 MiB
00:07:23.912 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57791
00:07:23.912 element at address: 0x200003affb80 with size: 1.000549 MiB
00:07:23.912 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57791
00:07:23.912 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:07:23.912 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57791
00:07:23.912 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:07:23.912 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57791
00:07:23.912 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:07:23.912 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57791
00:07:23.912 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:07:23.912 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57791
00:07:23.912 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:07:23.912 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:23.912 element at address: 0x200015e72280 with size: 0.500549 MiB
00:07:23.912 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:23.912 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:07:23.912 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:23.912 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:07:23.912 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57791
00:07:23.912 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:07:23.912 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:23.912 element at address: 0x20002b264140 with size: 0.023804 MiB
00:07:23.912 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:23.912 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:07:23.912 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57791
00:07:23.912 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:07:23.912 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:23.912 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:07:23.912 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57791
00:07:23.912 element at address: 0x200003aff800 with size: 0.000366 MiB
00:07:23.912 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57791
00:07:23.912 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:07:23.912 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57791
00:07:23.912 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:07:23.912 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:23.912 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:23.912 13:41:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57791
00:07:23.912 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57791 ']'
00:07:23.912 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57791
00:07:23.912 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:07:23.912 13:41:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:23.912 13:41:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57791
00:07:23.912 13:41:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:23.912 13:41:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 57791
13:41:34 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57791'
00:07:23.912 13:41:34 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57791
00:07:23.912 13:41:34 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57791
00:07:27.192
00:07:27.192 real 0m4.597s
00:07:27.192 user 0m4.470s
00:07:27.192 sys 0m0.626s
00:07:27.192 13:41:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:27.192 13:41:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:27.192 ************************************
00:07:27.192 END TEST dpdk_mem_utility
00:07:27.192 ************************************
00:07:27.192 13:41:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:27.192 13:41:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:27.192 13:41:36 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:27.192 13:41:36 -- common/autotest_common.sh@10 -- # set +x
00:07:27.192 ************************************
00:07:27.192 START TEST event
00:07:27.192 ************************************
00:07:27.192 13:41:36 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:27.192 * Looking for test storage...
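The killprocess xtrace above shows the teardown sequence: check the pid is non-empty, probe it with `kill -0`, resolve the command name with `ps -o comm=`, refuse to kill a `sudo` wrapper, then `kill` and `wait`. A minimal sketch of such a helper, reconstructed from the trace (the function body is an approximation, not the verbatim common/autotest_common.sh source):

```shell
#!/usr/bin/env bash
# killprocess-style helper reconstructed from the xtrace; an approximation.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1               # refuse an empty pid
  kill -0 "$pid" 2>/dev/null || return 1  # process must exist and be signalable
  if [ "$(uname)" = Linux ]; then
    # Resolve the command name and refuse to kill a sudo wrapper
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                 # reap it; ignore the SIGTERM status
  return 0
}

sleep 30 &
killprocess "$!"
```

`kill -0` sends no signal at all; it only reports whether the process exists and the caller may signal it, which is why the trace runs it before the real `kill`.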
00:07:27.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:27.192 13:41:36 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:27.192 13:41:36 event -- common/autotest_common.sh@1681 -- # lcov --version
00:07:27.192 13:41:36 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:27.192 13:41:37 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:27.192 13:41:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:27.192 13:41:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:27.192 13:41:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:27.192 13:41:37 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:27.192 13:41:37 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:27.192 13:41:37 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:27.192 13:41:37 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:27.192 13:41:37 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:27.192 13:41:37 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:27.192 13:41:37 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:27.192 13:41:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:27.192 13:41:37 event -- scripts/common.sh@344 -- # case "$op" in
00:07:27.192 13:41:37 event -- scripts/common.sh@345 -- # : 1
00:07:27.192 13:41:37 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:27.192 13:41:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:27.193 13:41:37 event -- scripts/common.sh@365 -- # decimal 1
00:07:27.193 13:41:37 event -- scripts/common.sh@353 -- # local d=1
00:07:27.193 13:41:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.193 13:41:37 event -- scripts/common.sh@355 -- # echo 1
00:07:27.193 13:41:37 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:27.193 13:41:37 event -- scripts/common.sh@366 -- # decimal 2
00:07:27.193 13:41:37 event -- scripts/common.sh@353 -- # local d=2
00:07:27.193 13:41:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:27.193 13:41:37 event -- scripts/common.sh@355 -- # echo 2
00:07:27.193 13:41:37 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:27.193 13:41:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:27.193 13:41:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:27.193 13:41:37 event -- scripts/common.sh@368 -- # return 0
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:27.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.193 --rc genhtml_branch_coverage=1
00:07:27.193 --rc genhtml_function_coverage=1
00:07:27.193 --rc genhtml_legend=1
00:07:27.193 --rc geninfo_all_blocks=1
00:07:27.193 --rc geninfo_unexecuted_blocks=1
00:07:27.193
00:07:27.193 '
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:27.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.193 --rc genhtml_branch_coverage=1
00:07:27.193 --rc genhtml_function_coverage=1
00:07:27.193 --rc genhtml_legend=1
00:07:27.193 --rc geninfo_all_blocks=1
00:07:27.193 --rc geninfo_unexecuted_blocks=1
00:07:27.193
00:07:27.193 '
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:27.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.193 --rc genhtml_branch_coverage=1
00:07:27.193 --rc genhtml_function_coverage=1
00:07:27.193 --rc genhtml_legend=1
00:07:27.193 --rc geninfo_all_blocks=1
00:07:27.193 --rc geninfo_unexecuted_blocks=1
00:07:27.193
00:07:27.193 '
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:27.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.193 --rc genhtml_branch_coverage=1
00:07:27.193 --rc genhtml_function_coverage=1
00:07:27.193 --rc genhtml_legend=1
00:07:27.193 --rc geninfo_all_blocks=1
00:07:27.193 --rc geninfo_unexecuted_blocks=1
00:07:27.193
00:07:27.193 '
00:07:27.193 13:41:37 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:27.193 13:41:37 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:27.193 13:41:37 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:27.193 13:41:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:27.193 13:41:37 event -- common/autotest_common.sh@10 -- # set +x
00:07:27.193 ************************************
00:07:27.193 START TEST event_perf
00:07:27.193 ************************************
00:07:27.193 13:41:37 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:27.193 Running I/O for 1 seconds...[2024-10-01 13:41:37.080446] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
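The cmp_versions xtrace above walks scripts/common.sh comparing `lcov` 1.15 against 2: split both versions on `.-:`, treat missing fields as zero, and compare field by field. A compact sketch of that logic (the real helper tallies lt/gt/eq counters via a `case "$op"`; this direct-return variant and its '<', '>', '==' operator set are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Field-wise version comparison in the spirit of the traced cmp_versions;
# a simplified sketch, not the verbatim scripts/common.sh implementation.
cmp_versions() {
  local -a ver1 ver2
  local op=$2 v a b
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$3"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    if (( a > b )); then [[ $op == '>' ]]; return; fi
    if (( a < b )); then [[ $op == '<' ]]; return; fi
  done
  [[ $op == '==' ]]                   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }
gt() { cmp_versions "$1" '>' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Comparing per field is what makes `2.39.2 < 2.39.10` come out right, where a plain string comparison would not.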
00:07:27.193 [2024-10-01 13:41:37.080656] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57904 ]
00:07:27.193 [2024-10-01 13:41:37.261364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:27.452 [2024-10-01 13:41:37.504291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.452 [2024-10-01 13:41:37.504458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:27.452 [2024-10-01 13:41:37.505438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.452 [2024-10-01 13:41:37.505461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:28.832 Running I/O for 1 seconds...
00:07:28.832 lcore 0: 193410
00:07:28.832 lcore 1: 193410
00:07:28.832 lcore 2: 193411
00:07:28.832 lcore 3: 193410
00:07:28.832 done.
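event_perf was launched with `-m 0xF`, which is why exactly four reactors (lcores 0-3) start and report above: each set bit N in the mask enables core N. A quick illustrative decoder for such a mask (not an SPDK tool):

```shell
#!/usr/bin/env bash
# Decode a DPDK/SPDK-style hexadecimal core mask: bit N set => core N enabled.
mask=0xF
for (( core = 0; core < 8; core++ )); do
  if (( (mask >> core) & 1 )); then
    echo "core $core enabled"   # prints cores 0 through 3 for mask 0xF
  fi
done
```

The reactors report near-identical event counts (~193410 each) because the poller workload is spread evenly across the enabled cores.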
00:07:28.832 00:07:28.832 real 0m1.885s 00:07:28.832 user 0m4.594s 00:07:28.832 sys 0m0.161s 00:07:28.832 13:41:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.832 13:41:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.832 ************************************ 00:07:28.832 END TEST event_perf 00:07:28.832 ************************************ 00:07:28.832 13:41:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:28.832 13:41:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:28.832 13:41:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.832 13:41:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.832 ************************************ 00:07:28.832 START TEST event_reactor 00:07:28.832 ************************************ 00:07:28.832 13:41:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:29.090 [2024-10-01 13:41:39.037603] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:29.090 [2024-10-01 13:41:39.037742] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57948 ] 00:07:29.090 [2024-10-01 13:41:39.216194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.350 [2024-10-01 13:41:39.431297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.723 test_start 00:07:30.723 oneshot 00:07:30.723 tick 100 00:07:30.723 tick 100 00:07:30.723 tick 250 00:07:30.723 tick 100 00:07:30.723 tick 100 00:07:30.723 tick 250 00:07:30.723 tick 100 00:07:30.723 tick 500 00:07:30.723 tick 100 00:07:30.723 tick 100 00:07:30.723 tick 250 00:07:30.723 tick 100 00:07:30.723 tick 100 00:07:30.723 test_end 00:07:30.723 00:07:30.723 real 0m1.841s 00:07:30.723 user 0m1.590s 00:07:30.723 sys 0m0.141s 00:07:30.723 13:41:40 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.723 13:41:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:30.723 ************************************ 00:07:30.723 END TEST event_reactor 00:07:30.723 ************************************ 00:07:30.723 13:41:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:30.723 13:41:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:30.723 13:41:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.723 13:41:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.723 ************************************ 00:07:30.723 START TEST event_reactor_perf 00:07:30.723 ************************************ 00:07:30.723 13:41:40 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:30.981 [2024-10-01 
13:41:40.943783] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:30.982 [2024-10-01 13:41:40.943886] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:07:30.982 [2024-10-01 13:41:41.113243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.240 [2024-10-01 13:41:41.323684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.614 test_start 00:07:32.614 test_end 00:07:32.614 Performance: 376806 events per second 00:07:32.614 00:07:32.614 real 0m1.822s 00:07:32.614 user 0m1.592s 00:07:32.614 sys 0m0.119s 00:07:32.614 13:41:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.614 13:41:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.614 ************************************ 00:07:32.614 END TEST event_reactor_perf 00:07:32.614 ************************************ 00:07:32.614 13:41:42 event -- event/event.sh@49 -- # uname -s 00:07:32.614 13:41:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:32.614 13:41:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:32.614 13:41:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.614 13:41:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.614 13:41:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.614 ************************************ 00:07:32.614 START TEST event_scheduler 00:07:32.614 ************************************ 00:07:32.614 13:41:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:32.917 * Looking for test storage... 
00:07:32.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:32.917 13:41:42 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:32.917 13:41:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:32.918 13:41:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:32.918 13:41:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.918 13:41:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.918 13:41:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:32.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.918 --rc genhtml_branch_coverage=1 00:07:32.918 --rc genhtml_function_coverage=1 00:07:32.918 --rc genhtml_legend=1 00:07:32.918 --rc geninfo_all_blocks=1 00:07:32.918 --rc geninfo_unexecuted_blocks=1 00:07:32.918 00:07:32.918 ' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:32.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.918 --rc genhtml_branch_coverage=1 00:07:32.918 --rc genhtml_function_coverage=1 00:07:32.918 --rc 
genhtml_legend=1 00:07:32.918 --rc geninfo_all_blocks=1 00:07:32.918 --rc geninfo_unexecuted_blocks=1 00:07:32.918 00:07:32.918 ' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:32.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.918 --rc genhtml_branch_coverage=1 00:07:32.918 --rc genhtml_function_coverage=1 00:07:32.918 --rc genhtml_legend=1 00:07:32.918 --rc geninfo_all_blocks=1 00:07:32.918 --rc geninfo_unexecuted_blocks=1 00:07:32.918 00:07:32.918 ' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:32.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.918 --rc genhtml_branch_coverage=1 00:07:32.918 --rc genhtml_function_coverage=1 00:07:32.918 --rc genhtml_legend=1 00:07:32.918 --rc geninfo_all_blocks=1 00:07:32.918 --rc geninfo_unexecuted_blocks=1 00:07:32.918 00:07:32.918 ' 00:07:32.918 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:32.918 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58062 00:07:32.918 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:32.918 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.918 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58062 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58062 ']' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.918 13:41:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 [2024-10-01 13:41:43.112993] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:07:33.186 [2024-10-01 13:41:43.113126] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58062 ] 00:07:33.186 [2024-10-01 13:41:43.287380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.445 [2024-10-01 13:41:43.506115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.445 [2024-10-01 13:41:43.506290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.445 [2024-10-01 13:41:43.506484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.445 [2024-10-01 13:41:43.506567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:34.012 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.012 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.012 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.012 POWER: Cannot set governor of lcore 0 to performance 00:07:34.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.012 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.012 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.012 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:34.012 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:34.012 POWER: Unable to set Power Management Environment for lcore 0 00:07:34.012 [2024-10-01 13:41:43.984105] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:34.012 [2024-10-01 13:41:43.984127] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:34.012 [2024-10-01 13:41:43.984140] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:34.012 [2024-10-01 13:41:43.984163] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:34.012 [2024-10-01 13:41:43.984174] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:34.012 [2024-10-01 13:41:43.984186] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.012 13:41:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.012 13:41:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 [2024-10-01 13:41:44.309857] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:34.271 13:41:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.271 13:41:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:34.271 13:41:44 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.271 13:41:44 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 ************************************ 00:07:34.271 START TEST scheduler_create_thread 00:07:34.271 ************************************ 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 2 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 3 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 4 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 5 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 6 00:07:34.271 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.272 7 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.272 8 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.272 9 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.272 10 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.272 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.839 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.839 13:41:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:34.839 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.839 13:41:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.214 13:41:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.214 13:41:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:36.214 13:41:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:36.214 13:41:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.214 13:41:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.590 13:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.590 00:07:37.590 real 0m3.101s 00:07:37.590 user 0m0.027s 00:07:37.590 sys 0m0.007s 00:07:37.590 ************************************ 00:07:37.590 END TEST scheduler_create_thread 00:07:37.590 ************************************ 00:07:37.590 13:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.590 13:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.590 13:41:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:37.590 13:41:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58062 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58062 ']' 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58062 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58062 00:07:37.590 killing process with pid 58062 00:07:37.590 13:41:47 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:37.591 13:41:47 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:37.591 13:41:47 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58062' 00:07:37.591 13:41:47 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58062 00:07:37.591 13:41:47 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58062 00:07:37.849 [2024-10-01 13:41:47.806833] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:39.236 00:07:39.236 real 0m6.404s 00:07:39.236 user 0m12.420s 00:07:39.236 sys 0m0.546s 00:07:39.236 ************************************ 00:07:39.236 END TEST event_scheduler 00:07:39.236 ************************************ 00:07:39.236 13:41:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.236 13:41:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 13:41:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:39.236 13:41:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:39.236 13:41:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.236 13:41:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.236 13:41:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 ************************************ 00:07:39.236 START TEST app_repeat 00:07:39.236 ************************************ 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58179 00:07:39.236 
Process app_repeat pid: 58179 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58179' 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.236 spdk_app_start Round 0 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:39.236 13:41:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58179 /var/tmp/spdk-nbd.sock 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58179 ']' 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.236 13:41:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.237 [2024-10-01 13:41:49.349358] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:39.237 [2024-10-01 13:41:49.349850] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58179 ] 00:07:39.497 [2024-10-01 13:41:49.529954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.755 [2024-10-01 13:41:49.777779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.755 [2024-10-01 13:41:49.777806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.347 13:41:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.347 13:41:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:40.347 13:41:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.605 Malloc0 00:07:40.605 13:41:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.864 Malloc1 00:07:40.864 13:41:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.864 13:41:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.864 13:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.864 13:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.865 13:41:50 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.865 13:41:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.123 /dev/nbd0 00:07:41.123 13:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.123 13:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.123 1+0 records in 00:07:41.123 1+0 
records out 00:07:41.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283478 s, 14.4 MB/s 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.123 13:41:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.123 13:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.123 13:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.123 13:41:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:41.382 /dev/nbd1 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.382 1+0 records in 00:07:41.382 1+0 records out 00:07:41.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205042 s, 20.0 MB/s 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.382 13:41:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.382 13:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd0", 00:07:41.640 "bdev_name": "Malloc0" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd1", 00:07:41.640 "bdev_name": "Malloc1" 00:07:41.640 } 00:07:41.640 ]' 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd0", 00:07:41.640 "bdev_name": "Malloc0" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd1", 00:07:41.640 "bdev_name": "Malloc1" 00:07:41.640 } 00:07:41.640 ]' 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.640 /dev/nbd1' 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.640 /dev/nbd1' 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:41.640 13:41:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:41.899 256+0 records in 00:07:41.899 256+0 records out 00:07:41.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011253 s, 93.2 MB/s 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.899 256+0 records in 00:07:41.899 256+0 records out 00:07:41.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280438 s, 37.4 MB/s 00:07:41.899 13:41:51 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.899 256+0 records in 00:07:41.899 256+0 records out 00:07:41.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335223 s, 31.3 MB/s 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.899 13:41:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.158 13:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.159 13:41:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.418 13:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:42.677 13:41:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:42.677 13:41:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:42.968 13:41:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:44.868 [2024-10-01 13:41:54.543277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.868 [2024-10-01 13:41:54.765500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.868 [2024-10-01 13:41:54.765501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.868 
[2024-10-01 13:41:54.974305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:44.868 [2024-10-01 13:41:54.974433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.242 13:41:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:46.242 spdk_app_start Round 1 00:07:46.242 13:41:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:46.242 13:41:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58179 /var/tmp/spdk-nbd.sock 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58179 ']' 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.242 13:41:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:46.242 13:41:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:46.809 Malloc0 00:07:46.809 13:41:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.068 Malloc1 00:07:47.068 13:41:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.068 13:41:57 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.068 13:41:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:47.326 /dev/nbd0 00:07:47.327 13:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.327 13:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.327 1+0 records in 00:07:47.327 1+0 records out 00:07:47.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346425 s, 11.8 MB/s 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.327 
13:41:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.327 13:41:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.327 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.327 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.327 13:41:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:47.585 /dev/nbd1 00:07:47.585 13:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.585 13:41:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.585 1+0 records in 00:07:47.585 1+0 records out 00:07:47.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278126 s, 14.7 MB/s 00:07:47.585 13:41:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.586 13:41:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.586 13:41:57 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.586 13:41:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.586 13:41:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.586 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.586 13:41:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.586 13:41:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.586 13:41:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.586 13:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.844 { 00:07:47.844 "nbd_device": "/dev/nbd0", 00:07:47.844 "bdev_name": "Malloc0" 00:07:47.844 }, 00:07:47.844 { 00:07:47.844 "nbd_device": "/dev/nbd1", 00:07:47.844 "bdev_name": "Malloc1" 00:07:47.844 } 00:07:47.844 ]' 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.844 { 00:07:47.844 "nbd_device": "/dev/nbd0", 00:07:47.844 "bdev_name": "Malloc0" 00:07:47.844 }, 00:07:47.844 { 00:07:47.844 "nbd_device": "/dev/nbd1", 00:07:47.844 "bdev_name": "Malloc1" 00:07:47.844 } 00:07:47.844 ]' 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:47.844 /dev/nbd1' 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:47.844 /dev/nbd1' 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:47.844 
13:41:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:47.844 13:41:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:47.845 13:41:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:47.845 256+0 records in 00:07:47.845 256+0 records out 00:07:47.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012942 s, 81.0 MB/s 00:07:47.845 13:41:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.845 13:41:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.103 256+0 records in 00:07:48.103 256+0 records out 00:07:48.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02808 s, 37.3 MB/s 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.103 256+0 records in 00:07:48.103 256+0 records out 00:07:48.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329213 s, 31.9 MB/s 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.103 13:41:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.363 13:41:58 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.363 13:41:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.622 13:41:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.881 13:41:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:48.881 13:41:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:48.881 13:41:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:49.450 13:41:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:50.856 [2024-10-01 13:42:00.660025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.856 [2024-10-01 13:42:00.866091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.856 [2024-10-01 13:42:00.866114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.115 [2024-10-01 13:42:01.070760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:51.115 [2024-10-01 13:42:01.070860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:52.488 spdk_app_start Round 2 00:07:52.488 13:42:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:52.488 13:42:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:52.488 13:42:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58179 /var/tmp/spdk-nbd.sock 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58179 ']' 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:52.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.488 13:42:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:52.488 13:42:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:52.747 Malloc0 00:07:52.747 13:42:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.005 Malloc1 00:07:53.264 13:42:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.264 
13:42:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.264 13:42:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.264 /dev/nbd0 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:53.532 13:42:03 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.532 1+0 records in 00:07:53.532 1+0 records out 00:07:53.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381754 s, 10.7 MB/s 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.532 13:42:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:53.532 /dev/nbd1 00:07:53.532 13:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.791 13:42:03 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.791 1+0 records in 00:07:53.791 1+0 records out 00:07:53.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263748 s, 15.5 MB/s 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.791 13:42:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.791 13:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.051 13:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.051 { 00:07:54.051 "nbd_device": "/dev/nbd0", 00:07:54.051 "bdev_name": "Malloc0" 00:07:54.051 }, 00:07:54.051 { 00:07:54.051 "nbd_device": "/dev/nbd1", 00:07:54.051 "bdev_name": 
"Malloc1" 00:07:54.051 } 00:07:54.051 ]' 00:07:54.051 13:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.051 { 00:07:54.051 "nbd_device": "/dev/nbd0", 00:07:54.051 "bdev_name": "Malloc0" 00:07:54.051 }, 00:07:54.051 { 00:07:54.051 "nbd_device": "/dev/nbd1", 00:07:54.051 "bdev_name": "Malloc1" 00:07:54.051 } 00:07:54.051 ]' 00:07:54.051 13:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.051 /dev/nbd1' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.051 /dev/nbd1' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.051 256+0 records in 00:07:54.051 256+0 records out 00:07:54.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138389 s, 75.8 MB/s 
00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.051 256+0 records in 00:07:54.051 256+0 records out 00:07:54.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030115 s, 34.8 MB/s 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.051 256+0 records in 00:07:54.051 256+0 records out 00:07:54.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347227 s, 30.2 MB/s 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.051 13:42:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.310 13:42:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.570 13:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:54.828 13:42:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:54.828 13:42:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:55.395 13:42:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:56.770 [2024-10-01 13:42:06.792240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.028 [2024-10-01 13:42:07.007207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.028 [2024-10-01 13:42:07.007212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.028 [2024-10-01 13:42:07.209210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:57.028 [2024-10-01 13:42:07.209286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.404 13:42:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58179 /var/tmp/spdk-nbd.sock 00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58179 ']' 00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.404 13:42:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:58.662 13:42:08 event.app_repeat -- event/event.sh@39 -- # killprocess 58179 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58179 ']' 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58179 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58179 00:07:58.662 killing process with pid 58179 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58179' 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58179 00:07:58.662 13:42:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58179 00:08:00.042 spdk_app_start is called in Round 0. 00:08:00.042 Shutdown signal received, stop current app iteration 00:08:00.042 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:08:00.042 spdk_app_start is called in Round 1. 00:08:00.042 Shutdown signal received, stop current app iteration 00:08:00.042 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:08:00.042 spdk_app_start is called in Round 2. 
00:08:00.042 Shutdown signal received, stop current app iteration 00:08:00.042 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization... 00:08:00.042 spdk_app_start is called in Round 3. 00:08:00.042 Shutdown signal received, stop current app iteration 00:08:00.042 13:42:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:00.042 13:42:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:00.042 00:08:00.042 real 0m20.627s 00:08:00.042 user 0m43.141s 00:08:00.042 sys 0m3.400s 00:08:00.042 13:42:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.042 13:42:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.042 ************************************ 00:08:00.042 END TEST app_repeat 00:08:00.042 ************************************ 00:08:00.042 13:42:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:00.042 13:42:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:00.042 13:42:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.042 13:42:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.042 13:42:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.042 ************************************ 00:08:00.042 START TEST cpu_locks 00:08:00.042 ************************************ 00:08:00.042 13:42:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:00.042 * Looking for test storage... 
00:08:00.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.042 13:42:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:00.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.042 --rc genhtml_branch_coverage=1 00:08:00.042 --rc genhtml_function_coverage=1 00:08:00.042 --rc genhtml_legend=1 00:08:00.042 --rc geninfo_all_blocks=1 00:08:00.042 --rc geninfo_unexecuted_blocks=1 00:08:00.042 00:08:00.042 ' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:00.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.042 --rc genhtml_branch_coverage=1 00:08:00.042 --rc genhtml_function_coverage=1 00:08:00.042 --rc genhtml_legend=1 00:08:00.042 --rc geninfo_all_blocks=1 00:08:00.042 --rc geninfo_unexecuted_blocks=1 
00:08:00.042 00:08:00.042 ' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:00.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.042 --rc genhtml_branch_coverage=1 00:08:00.042 --rc genhtml_function_coverage=1 00:08:00.042 --rc genhtml_legend=1 00:08:00.042 --rc geninfo_all_blocks=1 00:08:00.042 --rc geninfo_unexecuted_blocks=1 00:08:00.042 00:08:00.042 ' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:00.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.042 --rc genhtml_branch_coverage=1 00:08:00.042 --rc genhtml_function_coverage=1 00:08:00.042 --rc genhtml_legend=1 00:08:00.042 --rc geninfo_all_blocks=1 00:08:00.042 --rc geninfo_unexecuted_blocks=1 00:08:00.042 00:08:00.042 ' 00:08:00.042 13:42:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:00.042 13:42:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:00.042 13:42:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:00.042 13:42:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.042 13:42:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.302 ************************************ 00:08:00.302 START TEST default_locks 00:08:00.302 ************************************ 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58642 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.302 
13:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58642 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58642 ']' 00:08:00.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.302 13:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.302 [2024-10-01 13:42:10.341428] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:00.303 [2024-10-01 13:42:10.341556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:08:00.561 [2024-10-01 13:42:10.513817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.561 [2024-10-01 13:42:10.724327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.496 13:42:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.496 13:42:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:01.496 13:42:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58642 00:08:01.496 13:42:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58642 00:08:01.496 13:42:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58642 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58642 ']' 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58642 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58642 00:08:02.064 killing process with pid 58642 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58642' 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58642 00:08:02.064 13:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58642 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58642 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58642 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58642 00:08:05.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.348 ERROR: process (pid: 58642) is no longer running 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58642 ']' 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58642) - No such process 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:05.348 00:08:05.348 real 0m4.607s 00:08:05.348 user 0m4.543s 00:08:05.348 sys 0m0.755s 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.348 13:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.348 ************************************ 00:08:05.349 END TEST default_locks 00:08:05.349 ************************************ 00:08:05.349 13:42:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:05.349 13:42:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.349 13:42:14 event.cpu_locks -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.349 13:42:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.349 ************************************ 00:08:05.349 START TEST default_locks_via_rpc 00:08:05.349 ************************************ 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58718 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58718 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58718 ']' 00:08:05.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.349 13:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.349 [2024-10-01 13:42:14.996420] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:05.349 [2024-10-01 13:42:14.996599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58718 ] 00:08:05.349 [2024-10-01 13:42:15.170686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.349 [2024-10-01 13:42:15.440599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.281 13:42:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58718 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58718 00:08:06.281 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58718 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58718 ']' 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58718 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58718 00:08:06.859 killing process with pid 58718 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58718' 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58718 00:08:06.859 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58718 00:08:09.387 ************************************ 00:08:09.387 END TEST default_locks_via_rpc 00:08:09.387 ************************************ 00:08:09.387 00:08:09.387 real 0m4.554s 00:08:09.387 user 0m4.669s 00:08:09.387 sys 0m0.716s 00:08:09.387 
13:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.387 13:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.387 13:42:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:09.387 13:42:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.387 13:42:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.387 13:42:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.387 ************************************ 00:08:09.387 START TEST non_locking_app_on_locked_coremask 00:08:09.387 ************************************ 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58805 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58805 /var/tmp/spdk.sock 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58805 ']' 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.387 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.645 [2024-10-01 13:42:19.608722] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:09.645 [2024-10-01 13:42:19.609056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58805 ] 00:08:09.645 [2024-10-01 13:42:19.783012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.903 [2024-10-01 13:42:20.002216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58821 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58821 /var/tmp/spdk2.sock 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58821 ']' 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.838 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.838 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.838 [2024-10-01 13:42:21.003690] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:10.838 [2024-10-01 13:42:21.003833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58821 ] 00:08:11.097 [2024-10-01 13:42:21.171129] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:11.097 [2024-10-01 13:42:21.171184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.664 [2024-10-01 13:42:21.603656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.585 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.585 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:13.585 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58805 00:08:13.585 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58805 00:08:13.585 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58805 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58805 ']' 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58805 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58805 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58805' 00:08:14.523 killing process with pid 58805 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58805 00:08:14.523 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58805 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58821 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58821 ']' 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58821 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58821 00:08:19.848 killing process with pid 58821 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58821' 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58821 00:08:19.848 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58821 00:08:22.384 00:08:22.384 real 0m12.702s 00:08:22.384 user 0m13.029s 00:08:22.384 sys 0m1.500s 00:08:22.384 ************************************ 00:08:22.384 END TEST non_locking_app_on_locked_coremask 00:08:22.384 13:42:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.384 13:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 ************************************ 00:08:22.384 13:42:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:22.384 13:42:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.384 13:42:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.384 13:42:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 ************************************ 00:08:22.384 START TEST locking_app_on_unlocked_coremask 00:08:22.384 ************************************ 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58985 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58985 /var/tmp/spdk.sock 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58985 ']' 00:08:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.384 13:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-01 13:42:32.399818] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:22.384 [2024-10-01 13:42:32.400170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:08:22.384 [2024-10-01 13:42:32.571761] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.384 [2024-10-01 13:42:32.571830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.644 [2024-10-01 13:42:32.791225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.584 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59001 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59001 /var/tmp/spdk2.sock 00:08:23.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59001 ']' 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.585 13:42:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.843 [2024-10-01 13:42:33.775837] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:23.843 [2024-10-01 13:42:33.775969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:08:23.843 [2024-10-01 13:42:33.944121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.411 [2024-10-01 13:42:34.362214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.343 13:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.343 13:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:26.343 13:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59001 00:08:26.343 13:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59001 00:08:26.343 13:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58985 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58985 ']' 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58985 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58985 00:08:27.279 killing process with pid 58985 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58985' 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58985 00:08:27.279 13:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58985 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59001 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59001 ']' 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59001 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59001 00:08:32.551 killing process with pid 59001 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59001' 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59001 00:08:32.551 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59001 00:08:35.085 00:08:35.085 real 0m12.753s 00:08:35.085 user 0m13.116s 00:08:35.085 sys 0m1.484s 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.085 ************************************ 00:08:35.085 END TEST locking_app_on_unlocked_coremask 00:08:35.085 ************************************ 00:08:35.085 13:42:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:35.085 13:42:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.085 13:42:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.085 13:42:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.085 ************************************ 00:08:35.085 START TEST locking_app_on_locked_coremask 00:08:35.085 ************************************ 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59160 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59160 /var/tmp/spdk.sock 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59160 ']' 00:08:35.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.085 13:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.085 [2024-10-01 13:42:45.213526] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:35.085 [2024-10-01 13:42:45.213657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ] 00:08:35.344 [2024-10-01 13:42:45.388958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.603 [2024-10-01 13:42:45.606602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59182 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 
59182 /var/tmp/spdk2.sock 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59182 /var/tmp/spdk2.sock 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59182 /var/tmp/spdk2.sock 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59182 ']' 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.540 13:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:36.540 [2024-10-01 13:42:46.637739] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:36.540 [2024-10-01 13:42:46.637875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:08:36.798 [2024-10-01 13:42:46.805106] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59160 has claimed it. 00:08:36.798 [2024-10-01 13:42:46.805169] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:37.089 ERROR: process (pid: 59182) is no longer running 00:08:37.089 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59182) - No such process 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59160 00:08:37.089 13:42:47 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59160 00:08:37.089 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59160 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59160 ']' 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59160 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.656 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59160 00:08:37.916 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.916 killing process with pid 59160 00:08:37.916 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.916 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59160' 00:08:37.916 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59160 00:08:37.916 13:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59160 00:08:40.451 ************************************ 00:08:40.451 END TEST locking_app_on_locked_coremask 00:08:40.451 ************************************ 00:08:40.451 00:08:40.451 real 0m5.315s 00:08:40.451 user 0m5.520s 00:08:40.451 sys 0m0.900s 00:08:40.451 13:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:40.451 13:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.451 13:42:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:40.451 13:42:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.451 13:42:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.451 13:42:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.451 ************************************ 00:08:40.451 START TEST locking_overlapped_coremask 00:08:40.451 ************************************ 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59257 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59257 /var/tmp/spdk.sock 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59257 ']' 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.451 13:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.451 [2024-10-01 13:42:50.598785] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:40.452 [2024-10-01 13:42:50.599115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:08:40.711 [2024-10-01 13:42:50.769108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.968 [2024-10-01 13:42:50.987921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.968 [2024-10-01 13:42:50.988015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.968 [2024-10-01 13:42:50.988048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59275 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59275 /var/tmp/spdk2.sock 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59275 
/var/tmp/spdk2.sock 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59275 /var/tmp/spdk2.sock 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59275 ']' 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.901 13:42:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:41.902 [2024-10-01 13:42:51.964881] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:41.902 [2024-10-01 13:42:51.965025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:08:42.159 [2024-10-01 13:42:52.133158] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59257 has claimed it. 00:08:42.160 [2024-10-01 13:42:52.133241] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:42.748 ERROR: process (pid: 59275) is no longer running 00:08:42.748 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59275) - No such process 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59257 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59257 ']' 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59257 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59257 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59257' 00:08:42.748 killing process with pid 59257 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59257 00:08:42.748 13:42:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59257 00:08:45.294 ************************************ 00:08:45.294 END TEST locking_overlapped_coremask 00:08:45.294 ************************************ 00:08:45.294 00:08:45.294 real 0m4.758s 00:08:45.294 user 0m12.471s 00:08:45.294 sys 0m0.684s 00:08:45.294 13:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.294 
13:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 13:42:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:45.294 13:42:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.294 13:42:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.294 13:42:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 ************************************ 00:08:45.294 START TEST locking_overlapped_coremask_via_rpc 00:08:45.294 ************************************ 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59344 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59344 /var/tmp/spdk.sock 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59344 ']' 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.295 13:42:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.295 [2024-10-01 13:42:55.425163] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:45.295 [2024-10-01 13:42:55.425543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ] 00:08:45.554 [2024-10-01 13:42:55.597572] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:45.554 [2024-10-01 13:42:55.597807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.811 [2024-10-01 13:42:55.817119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.811 [2024-10-01 13:42:55.817211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.811 [2024-10-01 13:42:55.817246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59368 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59368 /var/tmp/spdk2.sock 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:46.742 13:42:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59368 ']' 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:46.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.742 13:42:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.742 [2024-10-01 13:42:56.835289] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:46.743 [2024-10-01 13:42:56.835700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:08:47.000 [2024-10-01 13:42:57.004917] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:47.000 [2024-10-01 13:42:57.004993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:47.258 [2024-10-01 13:42:57.448771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.258 [2024-10-01 13:42:57.448871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.515 [2024-10-01 13:42:57.448901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.439 13:42:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.439 [2024-10-01 13:42:59.469638] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59344 has claimed it. 00:08:49.439 request: 00:08:49.439 { 00:08:49.439 "method": "framework_enable_cpumask_locks", 00:08:49.439 "req_id": 1 00:08:49.439 } 00:08:49.439 Got JSON-RPC error response 00:08:49.439 response: 00:08:49.439 { 00:08:49.439 "code": -32603, 00:08:49.439 "message": "Failed to claim CPU core: 2" 00:08:49.439 } 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59344 /var/tmp/spdk.sock 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59344 ']' 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.439 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59368 /var/tmp/spdk2.sock 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59368 ']' 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.697 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:49.954 00:08:49.954 real 0m4.661s 00:08:49.954 user 0m1.367s 00:08:49.954 sys 0m0.252s 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.954 13:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.954 ************************************ 00:08:49.954 END TEST locking_overlapped_coremask_via_rpc 00:08:49.954 ************************************ 00:08:49.954 13:43:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:49.954 13:43:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59344 ]] 00:08:49.954 13:43:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59344 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59344 ']' 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59344 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59344 00:08:49.954 killing process with pid 59344 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59344' 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59344 00:08:49.954 13:43:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59344 00:08:53.237 13:43:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59368 ]] 00:08:53.237 13:43:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59368 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59368 ']' 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59368 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59368 00:08:53.237 killing process with pid 59368 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59368' 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59368 00:08:53.237 13:43:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59368 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59344 ]] 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59344 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59344 ']' 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59344 00:08:55.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59344) - No such process 00:08:55.769 Process with pid 59344 is not found 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59344 is not found' 00:08:55.769 Process with pid 59368 is not found 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59368 ]] 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59368 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59368 ']' 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59368 00:08:55.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59368) - No such process 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59368 is not found' 00:08:55.769 13:43:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:55.769 00:08:55.769 real 0m55.697s 00:08:55.769 user 1m33.325s 00:08:55.769 sys 0m7.636s 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.769 13:43:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.769 
************************************ 00:08:55.769 END TEST cpu_locks 00:08:55.769 ************************************ 00:08:55.769 ************************************ 00:08:55.769 END TEST event 00:08:55.769 ************************************ 00:08:55.769 00:08:55.769 real 1m28.892s 00:08:55.769 user 2m36.911s 00:08:55.769 sys 0m12.380s 00:08:55.769 13:43:05 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.769 13:43:05 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.769 13:43:05 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:55.769 13:43:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.769 13:43:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.769 13:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:55.769 ************************************ 00:08:55.769 START TEST thread 00:08:55.769 ************************************ 00:08:55.769 13:43:05 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:55.769 * Looking for test storage... 
00:08:55.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:55.769 13:43:05 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:55.769 13:43:05 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:55.769 13:43:05 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:08:56.027 13:43:05 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:56.027 13:43:05 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.027 13:43:05 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.027 13:43:05 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.027 13:43:05 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.027 13:43:05 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.027 13:43:05 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.027 13:43:05 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.027 13:43:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.027 13:43:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.027 13:43:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.027 13:43:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.027 13:43:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:56.027 13:43:06 thread -- scripts/common.sh@345 -- # : 1 00:08:56.027 13:43:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.027 13:43:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.027 13:43:06 thread -- scripts/common.sh@365 -- # decimal 1 00:08:56.027 13:43:06 thread -- scripts/common.sh@353 -- # local d=1 00:08:56.027 13:43:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.027 13:43:06 thread -- scripts/common.sh@355 -- # echo 1 00:08:56.027 13:43:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.027 13:43:06 thread -- scripts/common.sh@366 -- # decimal 2 00:08:56.027 13:43:06 thread -- scripts/common.sh@353 -- # local d=2 00:08:56.027 13:43:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.027 13:43:06 thread -- scripts/common.sh@355 -- # echo 2 00:08:56.027 13:43:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.027 13:43:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.027 13:43:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.027 13:43:06 thread -- scripts/common.sh@368 -- # return 0 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:56.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.027 --rc genhtml_branch_coverage=1 00:08:56.027 --rc genhtml_function_coverage=1 00:08:56.027 --rc genhtml_legend=1 00:08:56.027 --rc geninfo_all_blocks=1 00:08:56.027 --rc geninfo_unexecuted_blocks=1 00:08:56.027 00:08:56.027 ' 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:56.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.027 --rc genhtml_branch_coverage=1 00:08:56.027 --rc genhtml_function_coverage=1 00:08:56.027 --rc genhtml_legend=1 00:08:56.027 --rc geninfo_all_blocks=1 00:08:56.027 --rc geninfo_unexecuted_blocks=1 00:08:56.027 00:08:56.027 ' 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:56.027 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.027 --rc genhtml_branch_coverage=1 00:08:56.027 --rc genhtml_function_coverage=1 00:08:56.027 --rc genhtml_legend=1 00:08:56.027 --rc geninfo_all_blocks=1 00:08:56.027 --rc geninfo_unexecuted_blocks=1 00:08:56.027 00:08:56.027 ' 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:56.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.027 --rc genhtml_branch_coverage=1 00:08:56.027 --rc genhtml_function_coverage=1 00:08:56.027 --rc genhtml_legend=1 00:08:56.027 --rc geninfo_all_blocks=1 00:08:56.027 --rc geninfo_unexecuted_blocks=1 00:08:56.027 00:08:56.027 ' 00:08:56.027 13:43:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.027 13:43:06 thread -- common/autotest_common.sh@10 -- # set +x 00:08:56.027 ************************************ 00:08:56.027 START TEST thread_poller_perf 00:08:56.027 ************************************ 00:08:56.027 13:43:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.027 [2024-10-01 13:43:06.100007] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:56.028 [2024-10-01 13:43:06.100142] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59574 ] 00:08:56.286 [2024-10-01 13:43:06.267796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.543 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:56.543 [2024-10-01 13:43:06.497507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.919 ====================================== 00:08:57.919 busy:2498101402 (cyc) 00:08:57.919 total_run_count: 376000 00:08:57.919 tsc_hz: 2490000000 (cyc) 00:08:57.919 ====================================== 00:08:57.919 poller_cost: 6643 (cyc), 2667 (nsec) 00:08:57.919 00:08:57.919 real 0m1.863s 00:08:57.919 user 0m1.634s 00:08:57.919 sys 0m0.120s 00:08:57.919 13:43:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.919 ************************************ 00:08:57.919 END TEST thread_poller_perf 00:08:57.919 ************************************ 00:08:57.919 13:43:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:57.919 13:43:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:57.919 13:43:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:57.919 13:43:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.919 13:43:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.919 ************************************ 00:08:57.919 START TEST thread_poller_perf 00:08:57.919 ************************************ 00:08:57.919 13:43:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:08:57.919 [2024-10-01 13:43:08.029023] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:57.919 [2024-10-01 13:43:08.029340] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:08:58.177 [2024-10-01 13:43:08.201729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.435 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:58.435 [2024-10-01 13:43:08.434380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.812 ====================================== 00:08:59.812 busy:2494230916 (cyc) 00:08:59.812 total_run_count: 4864000 00:08:59.812 tsc_hz: 2490000000 (cyc) 00:08:59.812 ====================================== 00:08:59.812 poller_cost: 512 (cyc), 205 (nsec) 00:08:59.812 00:08:59.812 real 0m1.867s 00:08:59.812 user 0m1.634s 00:08:59.812 sys 0m0.122s 00:08:59.812 ************************************ 00:08:59.812 END TEST thread_poller_perf 00:08:59.812 ************************************ 00:08:59.812 13:43:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.812 13:43:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.812 13:43:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:59.812 ************************************ 00:08:59.812 END TEST thread 00:08:59.812 ************************************ 00:08:59.812 00:08:59.812 real 0m4.119s 00:08:59.812 user 0m3.446s 00:08:59.812 sys 0m0.452s 00:08:59.812 13:43:09 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.812 13:43:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.812 13:43:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:59.812 13:43:09 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:59.812 13:43:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.812 13:43:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.812 13:43:09 -- common/autotest_common.sh@10 -- # set +x 00:08:59.812 ************************************ 00:08:59.812 START TEST app_cmdline 00:08:59.812 ************************************ 00:08:59.812 13:43:09 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:00.070 * Looking for test storage... 00:09:00.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:00.070 13:43:10 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:00.070 13:43:10 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:00.070 13:43:10 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:00.070 13:43:10 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:00.070 13:43:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.070 13:43:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:00.071 13:43:10 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.071 13:43:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.071 --rc genhtml_branch_coverage=1 00:09:00.071 --rc genhtml_function_coverage=1 00:09:00.071 --rc genhtml_legend=1 00:09:00.071 --rc geninfo_all_blocks=1 00:09:00.071 --rc geninfo_unexecuted_blocks=1 00:09:00.071 00:09:00.071 ' 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.071 --rc genhtml_branch_coverage=1 00:09:00.071 --rc genhtml_function_coverage=1 00:09:00.071 --rc genhtml_legend=1 00:09:00.071 --rc geninfo_all_blocks=1 00:09:00.071 --rc geninfo_unexecuted_blocks=1 00:09:00.071 00:09:00.071 ' 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.071 --rc genhtml_branch_coverage=1 00:09:00.071 --rc genhtml_function_coverage=1 00:09:00.071 --rc genhtml_legend=1 00:09:00.071 --rc geninfo_all_blocks=1 00:09:00.071 --rc geninfo_unexecuted_blocks=1 00:09:00.071 00:09:00.071 ' 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.071 --rc genhtml_branch_coverage=1 00:09:00.071 --rc genhtml_function_coverage=1 00:09:00.071 --rc genhtml_legend=1 00:09:00.071 --rc geninfo_all_blocks=1 00:09:00.071 --rc geninfo_unexecuted_blocks=1 00:09:00.071 00:09:00.071 ' 00:09:00.071 
13:43:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:00.071 13:43:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59705 00:09:00.071 13:43:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59705 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59705 ']' 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.071 13:43:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.071 13:43:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.330 [2024-10-01 13:43:10.329808] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:00.330 [2024-10-01 13:43:10.330220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:09:00.330 [2024-10-01 13:43:10.510855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.588 [2024-10-01 13:43:10.730644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.523 13:43:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.523 13:43:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:01.523 13:43:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:01.782 { 00:09:01.782 "version": "SPDK v25.01-pre git sha1 3a41ae5b3", 00:09:01.782 "fields": { 00:09:01.782 "major": 25, 00:09:01.782 "minor": 1, 00:09:01.782 "patch": 0, 00:09:01.782 "suffix": "-pre", 00:09:01.782 "commit": "3a41ae5b3" 00:09:01.782 } 00:09:01.782 } 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.782 13:43:11 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:01.782 13:43:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:01.782 13:43:11 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:02.040 request: 00:09:02.040 { 00:09:02.040 "method": "env_dpdk_get_mem_stats", 00:09:02.040 "req_id": 1 00:09:02.040 } 00:09:02.041 Got JSON-RPC error response 00:09:02.041 response: 00:09:02.041 { 00:09:02.041 "code": -32601, 00:09:02.041 "message": "Method not found" 00:09:02.041 } 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.041 13:43:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59705 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59705 ']' 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59705 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.041 13:43:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59705 00:09:02.299 killing process with pid 59705 00:09:02.299 13:43:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.299 13:43:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.299 13:43:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59705' 00:09:02.299 13:43:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 59705 00:09:02.299 13:43:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 59705 00:09:04.835 00:09:04.835 real 0m4.899s 00:09:04.835 user 0m5.169s 00:09:04.835 sys 0m0.723s 00:09:04.835 13:43:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.835 ************************************ 00:09:04.835 END TEST app_cmdline 00:09:04.835 ************************************ 00:09:04.835 13:43:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:04.835 13:43:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:04.835 13:43:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.835 13:43:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.835 13:43:14 -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.835 ************************************ 00:09:04.835 START TEST version 00:09:04.835 ************************************ 00:09:04.835 13:43:14 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:05.094 * Looking for test storage... 00:09:05.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.094 13:43:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.094 13:43:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.094 13:43:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.094 13:43:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.094 13:43:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.094 13:43:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.094 13:43:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.094 13:43:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.094 13:43:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.094 13:43:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.094 13:43:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.094 13:43:15 version -- scripts/common.sh@344 -- # case "$op" in 00:09:05.094 13:43:15 version -- scripts/common.sh@345 -- # : 1 00:09:05.094 13:43:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.094 13:43:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.094 13:43:15 version -- scripts/common.sh@365 -- # decimal 1 00:09:05.094 13:43:15 version -- scripts/common.sh@353 -- # local d=1 00:09:05.094 13:43:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.094 13:43:15 version -- scripts/common.sh@355 -- # echo 1 00:09:05.094 13:43:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.094 13:43:15 version -- scripts/common.sh@366 -- # decimal 2 00:09:05.094 13:43:15 version -- scripts/common.sh@353 -- # local d=2 00:09:05.094 13:43:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.094 13:43:15 version -- scripts/common.sh@355 -- # echo 2 00:09:05.094 13:43:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.094 13:43:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.094 13:43:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.094 13:43:15 version -- scripts/common.sh@368 -- # return 0 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.094 --rc genhtml_branch_coverage=1 00:09:05.094 --rc genhtml_function_coverage=1 00:09:05.094 --rc genhtml_legend=1 00:09:05.094 --rc geninfo_all_blocks=1 00:09:05.094 --rc geninfo_unexecuted_blocks=1 00:09:05.094 00:09:05.094 ' 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.094 --rc genhtml_branch_coverage=1 00:09:05.094 --rc genhtml_function_coverage=1 00:09:05.094 --rc genhtml_legend=1 00:09:05.094 --rc geninfo_all_blocks=1 00:09:05.094 --rc geninfo_unexecuted_blocks=1 00:09:05.094 00:09:05.094 ' 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:05.094 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.094 --rc genhtml_branch_coverage=1 00:09:05.094 --rc genhtml_function_coverage=1 00:09:05.094 --rc genhtml_legend=1 00:09:05.094 --rc geninfo_all_blocks=1 00:09:05.094 --rc geninfo_unexecuted_blocks=1 00:09:05.094 00:09:05.094 ' 00:09:05.094 13:43:15 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.094 --rc genhtml_branch_coverage=1 00:09:05.094 --rc genhtml_function_coverage=1 00:09:05.094 --rc genhtml_legend=1 00:09:05.094 --rc geninfo_all_blocks=1 00:09:05.094 --rc geninfo_unexecuted_blocks=1 00:09:05.094 00:09:05.094 ' 00:09:05.094 13:43:15 version -- app/version.sh@17 -- # get_header_version major 00:09:05.094 13:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # cut -f2 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:09:05.094 13:43:15 version -- app/version.sh@17 -- # major=25 00:09:05.094 13:43:15 version -- app/version.sh@18 -- # get_header_version minor 00:09:05.094 13:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # cut -f2 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:09:05.094 13:43:15 version -- app/version.sh@18 -- # minor=1 00:09:05.094 13:43:15 version -- app/version.sh@19 -- # get_header_version patch 00:09:05.094 13:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # cut -f2 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:09:05.094 13:43:15 version -- app/version.sh@19 -- # patch=0 00:09:05.094 
13:43:15 version -- app/version.sh@20 -- # get_header_version suffix 00:09:05.094 13:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # cut -f2 00:09:05.094 13:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:09:05.094 13:43:15 version -- app/version.sh@20 -- # suffix=-pre 00:09:05.094 13:43:15 version -- app/version.sh@22 -- # version=25.1 00:09:05.094 13:43:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:05.094 13:43:15 version -- app/version.sh@28 -- # version=25.1rc0 00:09:05.094 13:43:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:05.094 13:43:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:05.354 13:43:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:05.354 13:43:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:05.354 00:09:05.354 real 0m0.331s 00:09:05.354 user 0m0.208s 00:09:05.354 sys 0m0.178s 00:09:05.354 ************************************ 00:09:05.354 END TEST version 00:09:05.354 ************************************ 00:09:05.354 13:43:15 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.354 13:43:15 version -- common/autotest_common.sh@10 -- # set +x 00:09:05.354 13:43:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:05.354 13:43:15 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:05.354 13:43:15 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:05.354 13:43:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.354 13:43:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.354 13:43:15 -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.354 ************************************ 00:09:05.354 START TEST bdev_raid 00:09:05.354 ************************************ 00:09:05.354 13:43:15 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:05.354 * Looking for test storage... 00:09:05.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:05.354 13:43:15 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.354 13:43:15 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.354 13:43:15 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.613 13:43:15 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.613 --rc genhtml_branch_coverage=1 00:09:05.613 --rc genhtml_function_coverage=1 00:09:05.613 --rc genhtml_legend=1 00:09:05.613 --rc geninfo_all_blocks=1 00:09:05.613 --rc geninfo_unexecuted_blocks=1 00:09:05.613 00:09:05.613 ' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.613 --rc genhtml_branch_coverage=1 00:09:05.613 --rc genhtml_function_coverage=1 00:09:05.613 --rc genhtml_legend=1 00:09:05.613 --rc geninfo_all_blocks=1 00:09:05.613 --rc geninfo_unexecuted_blocks=1 00:09:05.613 00:09:05.613 ' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:09:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.613 --rc genhtml_branch_coverage=1 00:09:05.613 --rc genhtml_function_coverage=1 00:09:05.613 --rc genhtml_legend=1 00:09:05.613 --rc geninfo_all_blocks=1 00:09:05.613 --rc geninfo_unexecuted_blocks=1 00:09:05.613 00:09:05.613 ' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.613 --rc genhtml_branch_coverage=1 00:09:05.613 --rc genhtml_function_coverage=1 00:09:05.613 --rc genhtml_legend=1 00:09:05.613 --rc geninfo_all_blocks=1 00:09:05.613 --rc geninfo_unexecuted_blocks=1 00:09:05.613 00:09:05.613 ' 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:05.613 13:43:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:05.613 13:43:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.613 13:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 ************************************ 00:09:05.613 START TEST raid1_resize_data_offset_test 00:09:05.613 ************************************ 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59898 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59898' 00:09:05.613 Process raid pid: 59898 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59898 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 59898 ']' 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.613 13:43:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 [2024-10-01 13:43:15.729977] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:05.614 [2024-10-01 13:43:15.730380] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.873 [2024-10-01 13:43:15.906856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.132 [2024-10-01 13:43:16.127368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.391 [2024-10-01 13:43:16.349132] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.391 [2024-10-01 13:43:16.349181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.650 malloc0 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.650 malloc1 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.650 13:43:16 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.650 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.909 null0 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.909 [2024-10-01 13:43:16.850559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:06.909 [2024-10-01 13:43:16.852816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:06.909 [2024-10-01 13:43:16.852874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:06.909 [2024-10-01 13:43:16.853047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:06.909 [2024-10-01 13:43:16.853061] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:06.909 [2024-10-01 13:43:16.853386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:06.909 [2024-10-01 13:43:16.853575] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:06.909 [2024-10-01 13:43:16.853590] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:06.909 [2024-10-01 13:43:16.853773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.909 [2024-10-01 13:43:16.906450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.909 13:43:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.476 malloc2 00:09:07.476 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.476 13:43:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:07.476 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.476 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.476 [2024-10-01 13:43:17.505220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:07.476 [2024-10-01 13:43:17.522800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:07.476 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.477 [2024-10-01 13:43:17.525344] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59898 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 59898 ']' 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 59898 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59898 00:09:07.477 killing process with pid 59898 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59898' 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 59898 00:09:07.477 13:43:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 59898 00:09:07.477 [2024-10-01 13:43:17.623668] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.477 [2024-10-01 13:43:17.624361] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:07.477 [2024-10-01 13:43:17.624463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.477 [2024-10-01 13:43:17.624484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:07.477 [2024-10-01 13:43:17.655883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.477 [2024-10-01 13:43:17.656243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.477 [2024-10-01 13:43:17.656268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:09.379 [2024-10-01 13:43:19.554939] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.812 13:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:09:10.812 00:09:10.812 real 0m5.273s 00:09:10.812 user 0m5.182s 00:09:10.812 sys 0m0.625s 00:09:10.812 13:43:20 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.812 ************************************ 00:09:10.812 END TEST raid1_resize_data_offset_test 00:09:10.812 ************************************ 00:09:10.812 13:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.812 13:43:20 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:10.812 13:43:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.812 13:43:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.812 13:43:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.812 ************************************ 00:09:10.812 START TEST raid0_resize_superblock_test 00:09:10.812 ************************************ 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:10.812 Process raid pid: 59987 00:09:10.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59987 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59987' 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59987 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59987 ']' 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.812 13:43:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 [2024-10-01 13:43:21.075426] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:11.069 [2024-10-01 13:43:21.075791] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.069 [2024-10-01 13:43:21.251449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.327 [2024-10-01 13:43:21.482075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.584 [2024-10-01 13:43:21.705244] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.584 [2024-10-01 13:43:21.705565] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.842 13:43:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.842 13:43:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:11.842 13:43:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:11.842 13:43:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.842 13:43:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.407 malloc0 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.407 [2024-10-01 13:43:22.539220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:12.407 [2024-10-01 13:43:22.539485] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.407 [2024-10-01 13:43:22.539523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:12.407 [2024-10-01 13:43:22.539541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.407 [2024-10-01 13:43:22.542469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.407 [2024-10-01 13:43:22.542655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:12.407 pt0 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.407 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 a6b416c5-eea9-43f1-804b-b3eea084b5ac 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 26ae4031-473e-49dc-9cfc-1a8df54bca35 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 4626188c-2789-4872-b40a-b3b91f337c17 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 [2024-10-01 13:43:22.672908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 26ae4031-473e-49dc-9cfc-1a8df54bca35 is claimed 00:09:12.668 [2024-10-01 13:43:22.673050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4626188c-2789-4872-b40a-b3b91f337c17 is claimed 00:09:12.668 [2024-10-01 13:43:22.673228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:12.668 [2024-10-01 13:43:22.673252] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:12.668 [2024-10-01 13:43:22.673613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:12.668 [2024-10-01 13:43:22.673812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:12.668 [2024-10-01 13:43:22.673824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:12.668 [2024-10-01 13:43:22.674038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 [2024-10-01 
13:43:22.772966] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 [2024-10-01 13:43:22.816944] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:12.668 [2024-10-01 13:43:22.816983] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '26ae4031-473e-49dc-9cfc-1a8df54bca35' was resized: old size 131072, new size 204800 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.668 [2024-10-01 13:43:22.828892] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:12.668 [2024-10-01 13:43:22.829097] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4626188c-2789-4872-b40a-b3b91f337c17' was resized: old size 131072, new size 204800 00:09:12.668 
[2024-10-01 13:43:22.829153] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.668 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.926 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:12.927 13:43:22 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 [2024-10-01 13:43:22.944794] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 [2024-10-01 13:43:22.988520] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:12.927 [2024-10-01 13:43:22.988759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:12.927 [2024-10-01 13:43:22.988888] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.927 [2024-10-01 13:43:22.988917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:12.927 [2024-10-01 13:43:22.989045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.927 [2024-10-01 13:43:22.989083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:09:12.927 [2024-10-01 13:43:22.989098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 [2024-10-01 13:43:22.996380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:12.927 [2024-10-01 13:43:22.996469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.927 [2024-10-01 13:43:22.996492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:12.927 [2024-10-01 13:43:22.996507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.927 [2024-10-01 13:43:22.999088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.927 [2024-10-01 13:43:22.999290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:12.927 pt0 00:09:12.927 [2024-10-01 13:43:23.001192] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 26ae4031-473e-49dc-9cfc-1a8df54bca35 00:09:12.927 [2024-10-01 13:43:23.001272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 26ae4031-473e-49dc-9cfc-1a8df54bca35 is claimed 00:09:12.927 [2024-10-01 13:43:23.001395] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4626188c-2789-4872-b40a-b3b91f337c17 00:09:12.927 [2024-10-01 13:43:23.001417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4626188c-2789-4872-b40a-b3b91f337c17 is claimed 00:09:12.927 
[2024-10-01 13:43:23.001604] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4626188c-2789-4872-b40a-b3b91f337c17 (2) smaller than existing raid bdev Raid (3) 00:09:12.927 [2024-10-01 13:43:23.001632] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 26ae4031-473e-49dc-9cfc-1a8df54bca35: File exists 00:09:12.927 [2024-10-01 13:43:23.001670] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:12.927 [2024-10-01 13:43:23.001685] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:12.927 [2024-10-01 13:43:23.001946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:12.927 [2024-10-01 13:43:23.002110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:12.927 [2024-10-01 13:43:23.002120] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:12.927 [2024-10-01 13:43:23.002288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 [2024-10-01 13:43:23.025560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59987 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59987 ']' 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 59987 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59987 00:09:12.927 killing process with pid 59987 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59987' 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59987 00:09:12.927 [2024-10-01 13:43:23.100547] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.927 [2024-10-01 13:43:23.100643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.927 [2024-10-01 13:43:23.100694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.927 [2024-10-01 13:43:23.100705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:12.927 13:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59987 00:09:14.846 [2024-10-01 13:43:24.557544] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.781 13:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:15.781 00:09:15.781 real 0m4.896s 00:09:15.781 user 0m5.104s 00:09:15.781 sys 0m0.667s 00:09:15.781 13:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.781 13:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 ************************************ 00:09:15.781 END TEST raid0_resize_superblock_test 00:09:15.781 ************************************ 00:09:15.781 13:43:25 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:15.781 13:43:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.781 13:43:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.781 13:43:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 ************************************ 00:09:15.781 START TEST raid1_resize_superblock_test 00:09:15.781 
************************************ 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:15.781 Process raid pid: 60086 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60086 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60086' 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60086 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60086 ']' 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.781 13:43:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.040 [2024-10-01 13:43:26.049046] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:16.040 [2024-10-01 13:43:26.049190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.040 [2024-10-01 13:43:26.219162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.298 [2024-10-01 13:43:26.442238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.557 [2024-10-01 13:43:26.653262] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.557 [2024-10-01 13:43:26.653313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.824 13:43:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.824 13:43:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.824 13:43:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:16.824 13:43:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.824 13:43:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.424 malloc0 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.424 [2024-10-01 13:43:27.543993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:17.424 [2024-10-01 13:43:27.544099] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.424 [2024-10-01 13:43:27.544140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.424 [2024-10-01 13:43:27.544173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.424 [2024-10-01 13:43:27.547339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.424 [2024-10-01 13:43:27.547408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:17.424 pt0 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.424 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.692 b65d3411-6fbf-40f6-9a35-e5a029c0905e 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.692 54067bbd-d357-4fb7-a1fa-659fc4960aca 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.692 13:43:27 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.692 fca37085-67db-483a-9e88-e399b8adcaac 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.692 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.692 [2024-10-01 13:43:27.678231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 54067bbd-d357-4fb7-a1fa-659fc4960aca is claimed 00:09:17.693 [2024-10-01 13:43:27.678373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev fca37085-67db-483a-9e88-e399b8adcaac is claimed 00:09:17.693 [2024-10-01 13:43:27.678553] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:17.693 [2024-10-01 13:43:27.678576] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:17.693 [2024-10-01 13:43:27.678884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:17.693 [2024-10-01 13:43:27.679121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:17.693 [2024-10-01 13:43:27.679134] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:17.693 [2024-10-01 13:43:27.679324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 [2024-10-01 
13:43:27.786353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 [2024-10-01 13:43:27.834205] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:17.693 [2024-10-01 13:43:27.834376] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '54067bbd-d357-4fb7-a1fa-659fc4960aca' was resized: old size 131072, new size 204800 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 [2024-10-01 13:43:27.842138] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:17.693 [2024-10-01 13:43:27.842168] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fca37085-67db-483a-9e88-e399b8adcaac' was resized: old size 131072, new size 204800 00:09:17.693 
[2024-10-01 13:43:27.842208] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.693 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:17.951 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.952 13:43:27 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.952 [2024-10-01 13:43:27.946025] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.952 [2024-10-01 13:43:27.981760] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:17.952 [2024-10-01 13:43:27.981851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:17.952 [2024-10-01 13:43:27.981892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:17.952 [2024-10-01 13:43:27.982060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.952 [2024-10-01 13:43:27.982264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.952 [2024-10-01 13:43:27.982332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.952 
[2024-10-01 13:43:27.982352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.952 13:43:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.952 [2024-10-01 13:43:27.993710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:17.952 [2024-10-01 13:43:27.993805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.952 [2024-10-01 13:43:27.993833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:17.952 [2024-10-01 13:43:27.993848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.952 [2024-10-01 13:43:27.996529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.952 [2024-10-01 13:43:27.996584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:17.952 pt0 00:09:17.952 [2024-10-01 13:43:27.998400] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 54067bbd-d357-4fb7-a1fa-659fc4960aca 00:09:17.952 [2024-10-01 13:43:27.998496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 54067bbd-d357-4fb7-a1fa-659fc4960aca is claimed 00:09:17.952 [2024-10-01 13:43:27.998630] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fca37085-67db-483a-9e88-e399b8adcaac 00:09:17.952 [2024-10-01 13:43:27.998651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev fca37085-67db-483a-9e88-e399b8adcaac is claimed 00:09:17.952 [2024-10-01 
13:43:27.998846] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fca37085-67db-483a-9e88-e399b8adcaac (2) smaller than existing raid bdev Raid (3) 00:09:17.952 [2024-10-01 13:43:27.998874] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 54067bbd-d357-4fb7-a1fa-659fc4960aca: File exists 00:09:17.952 [2024-10-01 13:43:27.998914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:17.952 [2024-10-01 13:43:27.998929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:17.952 [2024-10-01 13:43:27.999231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:17.952 [2024-10-01 13:43:27.999409] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:17.952 [2024-10-01 13:43:27.999433] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:17.952 [2024-10-01 13:43:27.999610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.952 [2024-10-01 13:43:28.022786] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60086 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60086 ']' 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60086 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60086 00:09:17.952 killing process with pid 60086 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60086' 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60086 00:09:17.952 [2024-10-01 13:43:28.096556] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.952 [2024-10-01 13:43:28.096662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.952 13:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60086 00:09:17.952 [2024-10-01 13:43:28.096725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.952 [2024-10-01 13:43:28.096736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:19.852 [2024-10-01 13:43:29.615506] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.787 ************************************ 00:09:20.787 END TEST raid1_resize_superblock_test 00:09:20.787 ************************************ 00:09:20.787 13:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:20.787 00:09:20.787 real 0m4.985s 00:09:20.787 user 0m5.245s 00:09:20.787 sys 0m0.638s 00:09:20.787 13:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.787 13:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:21.044 13:43:31 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:21.044 
13:43:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:21.044 13:43:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.044 13:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 ************************************ 00:09:21.044 START TEST raid_function_test_raid0 00:09:21.044 ************************************ 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:21.044 Process raid pid: 60194 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60194 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60194' 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60194 00:09:21.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60194 ']' 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.044 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 [2024-10-01 13:43:31.141898] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:21.044 [2024-10-01 13:43:31.142065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.303 [2024-10-01 13:43:31.318943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.562 [2024-10-01 13:43:31.538622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.562 [2024-10-01 13:43:31.737959] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.562 [2024-10-01 13:43:31.738011] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.820 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.820 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:09:21.820 13:43:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:21.820 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.820 13:43:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 Base_1 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.079 
13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 Base_2 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 [2024-10-01 13:43:32.102567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:22.079 [2024-10-01 13:43:32.104789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:22.079 [2024-10-01 13:43:32.104870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:22.079 [2024-10-01 13:43:32.104885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:22.079 [2024-10-01 13:43:32.105185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:22.079 [2024-10-01 13:43:32.105331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:22.079 [2024-10-01 13:43:32.105342] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:22.079 [2024-10-01 13:43:32.105546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:22.079 13:43:32 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:22.079 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:22.338 [2024-10-01 13:43:32.358253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:22.338 /dev/nbd0 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:22.338 1+0 records in 00:09:22.338 1+0 records out 00:09:22.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401653 s, 10.2 MB/s 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:22.338 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:22.596 { 00:09:22.596 "nbd_device": "/dev/nbd0", 00:09:22.596 "bdev_name": "raid" 00:09:22.596 } 00:09:22.596 ]' 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:22.596 { 00:09:22.596 "nbd_device": "/dev/nbd0", 00:09:22.596 "bdev_name": "raid" 00:09:22.596 } 00:09:22.596 ]' 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.596 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:22.597 4096+0 records in 00:09:22.597 4096+0 records out 00:09:22.597 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0363581 s, 57.7 MB/s 00:09:22.597 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:22.855 4096+0 records in 00:09:22.855 4096+0 records out 00:09:22.855 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.213178 s, 9.8 MB/s 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:22.855 128+0 records in 00:09:22.855 128+0 records out 00:09:22.855 65536 bytes (66 kB, 64 KiB) copied, 0.00138585 s, 47.3 MB/s 00:09:22.855 13:43:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:22.855 2035+0 records in 00:09:22.855 2035+0 records out 00:09:22.855 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0204926 s, 50.8 MB/s 00:09:22.855 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:23.113 456+0 records in 00:09:23.113 456+0 records out 00:09:23.113 233472 bytes (233 kB, 228 KiB) copied, 0.00486969 s, 47.9 MB/s 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.113 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.404 [2024-10-01 13:43:33.323481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:23.404 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60194 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60194 ']' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60194 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60194 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.664 killing process with pid 60194 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60194' 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60194 00:09:23.664 [2024-10-01 13:43:33.649372] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.664 13:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60194 00:09:23.664 [2024-10-01 13:43:33.649510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.664 [2024-10-01 13:43:33.649559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.664 [2024-10-01 13:43:33.649573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:23.924 [2024-10-01 13:43:33.858525] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.303 13:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:25.303 00:09:25.303 real 0m4.095s 00:09:25.303 user 0m4.675s 00:09:25.303 sys 0m1.070s 00:09:25.303 13:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.303 13:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 ************************************ 00:09:25.303 END TEST raid_function_test_raid0 00:09:25.303 ************************************ 00:09:25.303 13:43:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:25.303 13:43:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.303 13:43:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.303 13:43:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 
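The `raid_function_test_raid0` trace above walks a three-region unmap check: each pass zeroes a block range in the reference file with `dd conv=notrunc`, issues `blkdiscard` for the byte-equivalent range on `/dev/nbd0`, flushes, and `cmp`s the full 2 MiB. The byte offsets logged (`unmap_off=526336`, `unmap_len=1041920`, and so on) are just the block values scaled by the 512-byte sector size read from `lsblk`. A minimal sketch of that bookkeeping, using the block offsets and counts from the trace (the real commands are shown as comments; `/raidtest/raidrandtest` and `/dev/nbd0` are the paths the test uses and are not touched here):

```shell
#!/bin/bash
# Reproduce the unmap_off/unmap_len arithmetic from the bdev_raid.sh trace.
# blksize comes from `lsblk -o LOG-SEC /dev/nbd0` in the real test (512 here).
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for ((i = 0; i < 3; i++)); do
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    echo "region $i: off=$unmap_off len=$unmap_len"
    # In the real test each region is then zeroed in the reference file,
    # discarded on the device, flushed, and the whole 2 MiB is compared:
    #   dd if=/dev/zero of=/raidtest/raidrandtest bs=512 \
    #      seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
    #   blkdiscard -o "$unmap_off" -l "$unmap_len" /dev/nbd0
    #   blockdev --flushbufs /dev/nbd0
    #   cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
done
```

Running the arithmetic reproduces the offsets in the log: region 1 is `1028 * 512 = 526336` bytes in, `2035 * 512 = 1041920` bytes long, matching the `unmap_off=526336` / `unmap_len=1041920` lines above.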
************************************ 00:09:25.303 START TEST raid_function_test_concat 00:09:25.303 ************************************ 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60323 00:09:25.303 Process raid pid: 60323 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60323' 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60323 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60323 ']' 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:25.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.303 13:43:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 [2024-10-01 13:43:35.297695] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:25.303 [2024-10-01 13:43:35.297833] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.303 [2024-10-01 13:43:35.473319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.562 [2024-10-01 13:43:35.692777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.822 [2024-10-01 13:43:35.905223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.822 [2024-10-01 13:43:35.905273] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:26.082 Base_1 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:26.082 Base_2 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:26.082 [2024-10-01 13:43:36.253738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:26.082 [2024-10-01 13:43:36.255928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:26.082 [2024-10-01 13:43:36.256024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:26.082 [2024-10-01 13:43:36.256039] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:26.082 [2024-10-01 13:43:36.256349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:26.082 [2024-10-01 13:43:36.256516] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:26.082 [2024-10-01 13:43:36.256528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:26.082 [2024-10-01 13:43:36.256712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.082 13:43:36 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.082 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:26.342 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:26.342 [2024-10-01 13:43:36.505622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:26.342 /dev/nbd0 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.616 1+0 records in 00:09:26.616 1+0 records out 00:09:26.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434652 s, 9.4 MB/s 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.616 
13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:26.616 { 00:09:26.616 "nbd_device": "/dev/nbd0", 00:09:26.616 "bdev_name": "raid" 00:09:26.616 } 00:09:26.616 ]' 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.616 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:26.616 { 00:09:26.616 "nbd_device": "/dev/nbd0", 00:09:26.616 "bdev_name": "raid" 00:09:26.616 } 00:09:26.616 ]' 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:26.876 
13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:26.876 4096+0 records in 00:09:26.876 4096+0 records out 00:09:26.876 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0383579 s, 54.7 MB/s 00:09:26.876 13:43:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:27.135 4096+0 records in 00:09:27.135 4096+0 
records out 00:09:27.135 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.20678 s, 10.1 MB/s 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:27.135 128+0 records in 00:09:27.135 128+0 records out 00:09:27.135 65536 bytes (66 kB, 64 KiB) copied, 0.00164986 s, 39.7 MB/s 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:27.135 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:09:27.136 2035+0 records in 00:09:27.136 2035+0 records out 00:09:27.136 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0187774 s, 55.5 MB/s 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:27.136 456+0 records in 00:09:27.136 456+0 records out 00:09:27.136 233472 bytes (233 kB, 228 KiB) copied, 0.00437751 s, 53.3 MB/s 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.136 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:27.396 [2024-10-01 13:43:37.467041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:27.396 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:27.396 13:43:37 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60323 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60323 ']' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60323 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60323 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.655 killing process with pid 60323 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60323' 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60323 00:09:27.655 [2024-10-01 13:43:37.800135] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.655 [2024-10-01 13:43:37.800244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.655 13:43:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60323 00:09:27.655 [2024-10-01 13:43:37.800295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.655 [2024-10-01 13:43:37.800309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:27.914 [2024-10-01 13:43:38.012015] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.294 13:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:29.294 00:09:29.294 real 0m4.088s 00:09:29.294 user 0m4.594s 00:09:29.294 sys 0m1.148s 00:09:29.294 13:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.294 13:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:29.294 ************************************ 00:09:29.294 END TEST raid_function_test_concat 00:09:29.294 ************************************ 00:09:29.294 13:43:39 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:29.294 13:43:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.294 13:43:39 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.294 13:43:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.294 ************************************ 00:09:29.294 START TEST raid0_resize_test 00:09:29.294 ************************************ 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60452 00:09:29.294 Process raid pid: 60452 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60452' 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60452 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60452 ']' 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:09:29.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.294 13:43:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.294 [2024-10-01 13:43:39.461272] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:29.294 [2024-10-01 13:43:39.461416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.552 [2024-10-01 13:43:39.633935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.812 [2024-10-01 13:43:39.845857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.071 [2024-10-01 13:43:40.062363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.072 [2024-10-01 13:43:40.062416] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 Base_1 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 
13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 Base_2 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 [2024-10-01 13:43:40.322273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:30.332 [2024-10-01 13:43:40.324314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:30.332 [2024-10-01 13:43:40.324381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:30.332 [2024-10-01 13:43:40.324406] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:30.332 [2024-10-01 13:43:40.324681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:30.332 [2024-10-01 13:43:40.324810] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:30.332 [2024-10-01 13:43:40.324829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:30.332 [2024-10-01 13:43:40.324997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 
13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 [2024-10-01 13:43:40.330187] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:30.332 [2024-10-01 13:43:40.330221] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:30.332 true 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 [2024-10-01 13:43:40.342350] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 [2024-10-01 13:43:40.390193] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:30.332 [2024-10-01 13:43:40.390239] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:30.332 [2024-10-01 13:43:40.390294] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:30.332 true 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.332 [2024-10-01 13:43:40.402312] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60452 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60452 ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60452 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60452 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.332 killing process with pid 60452 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60452' 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60452 00:09:30.332 [2024-10-01 13:43:40.488532] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.332 [2024-10-01 13:43:40.488640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.332 13:43:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60452 00:09:30.332 [2024-10-01 13:43:40.488692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.332 [2024-10-01 13:43:40.488703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:30.332 [2024-10-01 13:43:40.506195] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.759 13:43:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:31.759 00:09:31.759 real 0m2.424s 00:09:31.759 user 0m2.525s 00:09:31.759 sys 0m0.412s 00:09:31.759 13:43:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.759 
************************************ 00:09:31.759 END TEST raid0_resize_test 00:09:31.759 13:43:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.759 ************************************ 00:09:31.759 13:43:41 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:31.759 13:43:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.759 13:43:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.759 13:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.759 ************************************ 00:09:31.759 START TEST raid1_resize_test 00:09:31.759 ************************************ 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60508 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.759 Process raid pid: 60508 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 
'Process raid pid: 60508' 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60508 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60508 ']' 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.759 13:43:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 [2024-10-01 13:43:41.965990] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:32.019 [2024-10-01 13:43:41.966123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.019 [2024-10-01 13:43:42.137533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.278 [2024-10-01 13:43:42.358507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.535 [2024-10-01 13:43:42.582210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.535 [2024-10-01 13:43:42.582264] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.793 Base_1 00:09:32.793 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 Base_2 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 [2024-10-01 13:43:42.830178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:32.794 [2024-10-01 13:43:42.832278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:32.794 [2024-10-01 13:43:42.832343] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:32.794 [2024-10-01 13:43:42.832357] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:32.794 [2024-10-01 13:43:42.832628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:32.794 [2024-10-01 13:43:42.832767] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:32.794 [2024-10-01 13:43:42.832784] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:32.794 [2024-10-01 13:43:42.832927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 [2024-10-01 13:43:42.838101] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.794 [2024-10-01 13:43:42.838135] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:32.794 true 00:09:32.794 
13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:32.794 [2024-10-01 13:43:42.850228] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 [2024-10-01 13:43:42.890073] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.794 [2024-10-01 13:43:42.890108] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:32.794 [2024-10-01 13:43:42.890143] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:32.794 true 00:09:32.794 13:43:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.794 [2024-10-01 13:43:42.906193] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60508 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60508 ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60508 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60508 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60508' 00:09:32.794 killing process with pid 60508 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60508 00:09:32.794 [2024-10-01 13:43:42.980121] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.794 [2024-10-01 13:43:42.980227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.794 13:43:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60508 00:09:32.794 [2024-10-01 13:43:42.980760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.794 [2024-10-01 13:43:42.980783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:33.053 [2024-10-01 13:43:42.998033] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.432 13:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:34.432 00:09:34.432 real 0m2.402s 00:09:34.432 user 0m2.488s 00:09:34.432 sys 0m0.391s 00:09:34.432 13:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.432 13:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.432 ************************************ 00:09:34.432 END TEST raid1_resize_test 00:09:34.432 ************************************ 00:09:34.433 13:43:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:34.433 13:43:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:34.433 13:43:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:34.433 13:43:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:34.433 13:43:44 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.433 13:43:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.433 ************************************ 00:09:34.433 START TEST raid_state_function_test 00:09:34.433 ************************************ 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60576 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:34.433 Process raid pid: 60576 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60576' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60576 00:09:34.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60576 ']' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.433 13:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.433 [2024-10-01 13:43:44.449253] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:34.433 [2024-10-01 13:43:44.449598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.433 [2024-10-01 13:43:44.621132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.753 [2024-10-01 13:43:44.841166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.012 [2024-10-01 13:43:45.047308] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.012 [2024-10-01 13:43:45.047567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.271 [2024-10-01 13:43:45.296268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.271 [2024-10-01 13:43:45.296326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.271 [2024-10-01 13:43:45.296342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.271 [2024-10-01 13:43:45.296355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.271 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.271 "name": "Existed_Raid", 00:09:35.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.271 "strip_size_kb": 64, 00:09:35.271 "state": "configuring", 00:09:35.271 "raid_level": "raid0", 00:09:35.271 "superblock": false, 00:09:35.271 "num_base_bdevs": 2, 00:09:35.271 "num_base_bdevs_discovered": 0, 00:09:35.271 "num_base_bdevs_operational": 2, 00:09:35.271 "base_bdevs_list": [ 00:09:35.272 { 00:09:35.272 "name": "BaseBdev1", 00:09:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.272 "is_configured": false, 00:09:35.272 "data_offset": 0, 00:09:35.272 "data_size": 0 00:09:35.272 }, 00:09:35.272 { 00:09:35.272 "name": "BaseBdev2", 00:09:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.272 "is_configured": false, 00:09:35.272 "data_offset": 0, 00:09:35.272 "data_size": 0 00:09:35.272 } 00:09:35.272 ] 00:09:35.272 }' 00:09:35.272 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.272 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 [2024-10-01 13:43:45.739584] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.840 [2024-10-01 13:43:45.739743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 [2024-10-01 13:43:45.751574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.840 [2024-10-01 13:43:45.751623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.840 [2024-10-01 13:43:45.751634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.840 [2024-10-01 13:43:45.751650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 [2024-10-01 13:43:45.808843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.840 BaseBdev1 
00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.840 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.840 [ 00:09:35.840 { 00:09:35.840 "name": "BaseBdev1", 00:09:35.840 "aliases": [ 00:09:35.840 "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2" 00:09:35.840 ], 00:09:35.840 "product_name": "Malloc disk", 00:09:35.840 "block_size": 512, 00:09:35.840 "num_blocks": 65536, 00:09:35.840 "uuid": "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2", 00:09:35.840 "assigned_rate_limits": { 00:09:35.840 "rw_ios_per_sec": 0, 00:09:35.840 
"rw_mbytes_per_sec": 0, 00:09:35.840 "r_mbytes_per_sec": 0, 00:09:35.840 "w_mbytes_per_sec": 0 00:09:35.840 }, 00:09:35.840 "claimed": true, 00:09:35.840 "claim_type": "exclusive_write", 00:09:35.840 "zoned": false, 00:09:35.840 "supported_io_types": { 00:09:35.840 "read": true, 00:09:35.840 "write": true, 00:09:35.840 "unmap": true, 00:09:35.840 "flush": true, 00:09:35.840 "reset": true, 00:09:35.840 "nvme_admin": false, 00:09:35.840 "nvme_io": false, 00:09:35.840 "nvme_io_md": false, 00:09:35.840 "write_zeroes": true, 00:09:35.840 "zcopy": true, 00:09:35.840 "get_zone_info": false, 00:09:35.841 "zone_management": false, 00:09:35.841 "zone_append": false, 00:09:35.841 "compare": false, 00:09:35.841 "compare_and_write": false, 00:09:35.841 "abort": true, 00:09:35.841 "seek_hole": false, 00:09:35.841 "seek_data": false, 00:09:35.841 "copy": true, 00:09:35.841 "nvme_iov_md": false 00:09:35.841 }, 00:09:35.841 "memory_domains": [ 00:09:35.841 { 00:09:35.841 "dma_device_id": "system", 00:09:35.841 "dma_device_type": 1 00:09:35.841 }, 00:09:35.841 { 00:09:35.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.841 "dma_device_type": 2 00:09:35.841 } 00:09:35.841 ], 00:09:35.841 "driver_specific": {} 00:09:35.841 } 00:09:35.841 ] 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.841 13:43:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.841 "name": "Existed_Raid", 00:09:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.841 "strip_size_kb": 64, 00:09:35.841 "state": "configuring", 00:09:35.841 "raid_level": "raid0", 00:09:35.841 "superblock": false, 00:09:35.841 "num_base_bdevs": 2, 00:09:35.841 "num_base_bdevs_discovered": 1, 00:09:35.841 "num_base_bdevs_operational": 2, 00:09:35.841 "base_bdevs_list": [ 00:09:35.841 { 00:09:35.841 "name": "BaseBdev1", 00:09:35.841 "uuid": "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2", 00:09:35.841 "is_configured": true, 00:09:35.841 "data_offset": 0, 00:09:35.841 "data_size": 65536 00:09:35.841 }, 00:09:35.841 { 00:09:35.841 "name": "BaseBdev2", 00:09:35.841 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.841 "is_configured": false, 00:09:35.841 "data_offset": 0, 00:09:35.841 "data_size": 0 00:09:35.841 } 00:09:35.841 ] 00:09:35.841 }' 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.841 13:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.100 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.101 [2024-10-01 13:43:46.276493] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.101 [2024-10-01 13:43:46.276552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.101 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.101 [2024-10-01 13:43:46.288538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.101 [2024-10-01 13:43:46.290787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.101 [2024-10-01 13:43:46.290842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.359 "name": "Existed_Raid", 00:09:36.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.359 "strip_size_kb": 64, 00:09:36.359 "state": "configuring", 00:09:36.359 "raid_level": "raid0", 00:09:36.359 "superblock": false, 00:09:36.359 "num_base_bdevs": 2, 00:09:36.359 "num_base_bdevs_discovered": 1, 00:09:36.359 "num_base_bdevs_operational": 2, 00:09:36.359 "base_bdevs_list": [ 00:09:36.359 { 00:09:36.359 "name": "BaseBdev1", 00:09:36.359 "uuid": "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2", 00:09:36.359 "is_configured": true, 00:09:36.359 "data_offset": 0, 00:09:36.359 "data_size": 65536 00:09:36.359 }, 00:09:36.359 { 00:09:36.359 "name": "BaseBdev2", 00:09:36.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.359 "is_configured": false, 00:09:36.359 "data_offset": 0, 00:09:36.359 "data_size": 0 00:09:36.359 } 00:09:36.359 ] 00:09:36.359 }' 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.359 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.618 [2024-10-01 13:43:46.780525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.618 [2024-10-01 13:43:46.780759] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.618 [2024-10-01 13:43:46.780809] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:36.618 [2024-10-01 13:43:46.781112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:36.618 [2024-10-01 
13:43:46.781278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.618 [2024-10-01 13:43:46.781297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:36.618 [2024-10-01 13:43:46.781584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.618 BaseBdev2 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.618 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.618 [ 00:09:36.618 { 
00:09:36.618 "name": "BaseBdev2", 00:09:36.618 "aliases": [ 00:09:36.877 "7ebe17f4-e9df-4f5c-93d3-8aabf0a591ab" 00:09:36.877 ], 00:09:36.877 "product_name": "Malloc disk", 00:09:36.877 "block_size": 512, 00:09:36.877 "num_blocks": 65536, 00:09:36.877 "uuid": "7ebe17f4-e9df-4f5c-93d3-8aabf0a591ab", 00:09:36.877 "assigned_rate_limits": { 00:09:36.877 "rw_ios_per_sec": 0, 00:09:36.877 "rw_mbytes_per_sec": 0, 00:09:36.877 "r_mbytes_per_sec": 0, 00:09:36.877 "w_mbytes_per_sec": 0 00:09:36.877 }, 00:09:36.877 "claimed": true, 00:09:36.877 "claim_type": "exclusive_write", 00:09:36.877 "zoned": false, 00:09:36.877 "supported_io_types": { 00:09:36.877 "read": true, 00:09:36.877 "write": true, 00:09:36.877 "unmap": true, 00:09:36.877 "flush": true, 00:09:36.877 "reset": true, 00:09:36.877 "nvme_admin": false, 00:09:36.877 "nvme_io": false, 00:09:36.877 "nvme_io_md": false, 00:09:36.877 "write_zeroes": true, 00:09:36.877 "zcopy": true, 00:09:36.877 "get_zone_info": false, 00:09:36.877 "zone_management": false, 00:09:36.877 "zone_append": false, 00:09:36.877 "compare": false, 00:09:36.877 "compare_and_write": false, 00:09:36.877 "abort": true, 00:09:36.877 "seek_hole": false, 00:09:36.877 "seek_data": false, 00:09:36.877 "copy": true, 00:09:36.877 "nvme_iov_md": false 00:09:36.877 }, 00:09:36.877 "memory_domains": [ 00:09:36.877 { 00:09:36.877 "dma_device_id": "system", 00:09:36.877 "dma_device_type": 1 00:09:36.877 }, 00:09:36.877 { 00:09:36.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.877 "dma_device_type": 2 00:09:36.877 } 00:09:36.877 ], 00:09:36.877 "driver_specific": {} 00:09:36.877 } 00:09:36.877 ] 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.877 "name": "Existed_Raid", 00:09:36.877 "uuid": "facfc279-5134-4cfa-bdbd-6d389e21b6f7", 00:09:36.877 
"strip_size_kb": 64, 00:09:36.877 "state": "online", 00:09:36.877 "raid_level": "raid0", 00:09:36.877 "superblock": false, 00:09:36.877 "num_base_bdevs": 2, 00:09:36.877 "num_base_bdevs_discovered": 2, 00:09:36.877 "num_base_bdevs_operational": 2, 00:09:36.877 "base_bdevs_list": [ 00:09:36.877 { 00:09:36.877 "name": "BaseBdev1", 00:09:36.877 "uuid": "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2", 00:09:36.877 "is_configured": true, 00:09:36.877 "data_offset": 0, 00:09:36.877 "data_size": 65536 00:09:36.877 }, 00:09:36.877 { 00:09:36.877 "name": "BaseBdev2", 00:09:36.877 "uuid": "7ebe17f4-e9df-4f5c-93d3-8aabf0a591ab", 00:09:36.877 "is_configured": true, 00:09:36.877 "data_offset": 0, 00:09:36.877 "data_size": 65536 00:09:36.877 } 00:09:36.877 ] 00:09:36.877 }' 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.877 13:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.136 
13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.136 [2024-10-01 13:43:47.244254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.136 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.136 "name": "Existed_Raid", 00:09:37.136 "aliases": [ 00:09:37.136 "facfc279-5134-4cfa-bdbd-6d389e21b6f7" 00:09:37.136 ], 00:09:37.136 "product_name": "Raid Volume", 00:09:37.136 "block_size": 512, 00:09:37.136 "num_blocks": 131072, 00:09:37.136 "uuid": "facfc279-5134-4cfa-bdbd-6d389e21b6f7", 00:09:37.136 "assigned_rate_limits": { 00:09:37.136 "rw_ios_per_sec": 0, 00:09:37.136 "rw_mbytes_per_sec": 0, 00:09:37.136 "r_mbytes_per_sec": 0, 00:09:37.136 "w_mbytes_per_sec": 0 00:09:37.136 }, 00:09:37.136 "claimed": false, 00:09:37.136 "zoned": false, 00:09:37.136 "supported_io_types": { 00:09:37.136 "read": true, 00:09:37.136 "write": true, 00:09:37.136 "unmap": true, 00:09:37.136 "flush": true, 00:09:37.136 "reset": true, 00:09:37.136 "nvme_admin": false, 00:09:37.136 "nvme_io": false, 00:09:37.136 "nvme_io_md": false, 00:09:37.137 "write_zeroes": true, 00:09:37.137 "zcopy": false, 00:09:37.137 "get_zone_info": false, 00:09:37.137 "zone_management": false, 00:09:37.137 "zone_append": false, 00:09:37.137 "compare": false, 00:09:37.137 "compare_and_write": false, 00:09:37.137 "abort": false, 00:09:37.137 "seek_hole": false, 00:09:37.137 "seek_data": false, 00:09:37.137 "copy": false, 00:09:37.137 "nvme_iov_md": false 00:09:37.137 }, 00:09:37.137 "memory_domains": [ 00:09:37.137 { 00:09:37.137 "dma_device_id": "system", 00:09:37.137 "dma_device_type": 1 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.137 "dma_device_type": 2 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "system", 
00:09:37.137 "dma_device_type": 1 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.137 "dma_device_type": 2 00:09:37.137 } 00:09:37.137 ], 00:09:37.137 "driver_specific": { 00:09:37.137 "raid": { 00:09:37.137 "uuid": "facfc279-5134-4cfa-bdbd-6d389e21b6f7", 00:09:37.137 "strip_size_kb": 64, 00:09:37.137 "state": "online", 00:09:37.137 "raid_level": "raid0", 00:09:37.137 "superblock": false, 00:09:37.137 "num_base_bdevs": 2, 00:09:37.137 "num_base_bdevs_discovered": 2, 00:09:37.137 "num_base_bdevs_operational": 2, 00:09:37.137 "base_bdevs_list": [ 00:09:37.137 { 00:09:37.137 "name": "BaseBdev1", 00:09:37.137 "uuid": "0e0c6470-d6cf-43e2-a6ce-46a2da8804d2", 00:09:37.137 "is_configured": true, 00:09:37.137 "data_offset": 0, 00:09:37.137 "data_size": 65536 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "name": "BaseBdev2", 00:09:37.137 "uuid": "7ebe17f4-e9df-4f5c-93d3-8aabf0a591ab", 00:09:37.137 "is_configured": true, 00:09:37.137 "data_offset": 0, 00:09:37.137 "data_size": 65536 00:09:37.137 } 00:09:37.137 ] 00:09:37.137 } 00:09:37.137 } 00:09:37.137 }' 00:09:37.137 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.137 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:37.137 BaseBdev2' 00:09:37.137 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.397 13:43:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.397 [2024-10-01 13:43:47.467718] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.397 [2024-10-01 13:43:47.467754] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.397 [2024-10-01 13:43:47.467811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.397 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.398 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.657 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.657 "name": "Existed_Raid", 00:09:37.657 "uuid": "facfc279-5134-4cfa-bdbd-6d389e21b6f7", 00:09:37.657 "strip_size_kb": 64, 00:09:37.658 "state": "offline", 00:09:37.658 "raid_level": "raid0", 00:09:37.658 "superblock": false, 00:09:37.658 "num_base_bdevs": 2, 00:09:37.658 "num_base_bdevs_discovered": 1, 00:09:37.658 "num_base_bdevs_operational": 1, 00:09:37.658 "base_bdevs_list": [ 00:09:37.658 { 00:09:37.658 "name": null, 00:09:37.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.658 "is_configured": false, 00:09:37.658 "data_offset": 0, 00:09:37.658 "data_size": 65536 00:09:37.658 }, 00:09:37.658 { 00:09:37.658 "name": "BaseBdev2", 00:09:37.658 "uuid": "7ebe17f4-e9df-4f5c-93d3-8aabf0a591ab", 00:09:37.658 "is_configured": true, 00:09:37.658 "data_offset": 0, 00:09:37.658 "data_size": 65536 00:09:37.658 } 00:09:37.658 ] 00:09:37.658 }' 00:09:37.658 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.658 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.916 13:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.916 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.916 [2024-10-01 13:43:48.037349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.916 [2024-10-01 13:43:48.037433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60576 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60576 ']' 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60576 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60576 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.176 killing process with pid 60576 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60576' 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60576 00:09:38.176 [2024-10-01 13:43:48.233041] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.176 13:43:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 60576 00:09:38.176 [2024-10-01 13:43:48.250788] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:39.554 00:09:39.554 real 0m5.214s 00:09:39.554 user 0m7.364s 00:09:39.554 sys 0m0.924s 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.554 ************************************ 00:09:39.554 END TEST raid_state_function_test 00:09:39.554 ************************************ 00:09:39.554 13:43:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:39.554 13:43:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:39.554 13:43:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.554 13:43:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.554 ************************************ 00:09:39.554 START TEST raid_state_function_test_sb 00:09:39.554 ************************************ 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@229 -- # raid_pid=60824 00:09:39.554 Process raid pid: 60824 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60824' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60824 00:09:39.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60824 ']' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.554 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.554 [2024-10-01 13:43:49.734494] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:39.554 [2024-10-01 13:43:49.734875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.813 [2024-10-01 13:43:49.911450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.073 [2024-10-01 13:43:50.166923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.332 [2024-10-01 13:43:50.390128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.332 [2024-10-01 13:43:50.390168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.590 [2024-10-01 13:43:50.667562] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.590 [2024-10-01 13:43:50.667625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.590 [2024-10-01 13:43:50.667641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.590 [2024-10-01 13:43:50.667655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.590 
13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.590 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.590 "name": "Existed_Raid", 00:09:40.590 "uuid": "be69524f-cb62-4666-a30a-f1554be7a241", 00:09:40.590 "strip_size_kb": 
64, 00:09:40.590 "state": "configuring", 00:09:40.590 "raid_level": "raid0", 00:09:40.590 "superblock": true, 00:09:40.590 "num_base_bdevs": 2, 00:09:40.590 "num_base_bdevs_discovered": 0, 00:09:40.590 "num_base_bdevs_operational": 2, 00:09:40.590 "base_bdevs_list": [ 00:09:40.590 { 00:09:40.590 "name": "BaseBdev1", 00:09:40.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.590 "is_configured": false, 00:09:40.590 "data_offset": 0, 00:09:40.590 "data_size": 0 00:09:40.590 }, 00:09:40.590 { 00:09:40.590 "name": "BaseBdev2", 00:09:40.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.590 "is_configured": false, 00:09:40.590 "data_offset": 0, 00:09:40.590 "data_size": 0 00:09:40.590 } 00:09:40.591 ] 00:09:40.591 }' 00:09:40.591 13:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.591 13:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 [2024-10-01 13:43:51.099270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.159 [2024-10-01 13:43:51.099317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.159 13:43:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 [2024-10-01 13:43:51.107305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.159 [2024-10-01 13:43:51.107358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.159 [2024-10-01 13:43:51.107370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.159 [2024-10-01 13:43:51.107387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 [2024-10-01 13:43:51.166642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.159 BaseBdev1 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.159 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 [ 00:09:41.159 { 00:09:41.159 "name": "BaseBdev1", 00:09:41.159 "aliases": [ 00:09:41.159 "e3aa32fa-a5a7-43af-8b13-7144fac57bc3" 00:09:41.159 ], 00:09:41.159 "product_name": "Malloc disk", 00:09:41.159 "block_size": 512, 00:09:41.159 "num_blocks": 65536, 00:09:41.159 "uuid": "e3aa32fa-a5a7-43af-8b13-7144fac57bc3", 00:09:41.159 "assigned_rate_limits": { 00:09:41.159 "rw_ios_per_sec": 0, 00:09:41.159 "rw_mbytes_per_sec": 0, 00:09:41.159 "r_mbytes_per_sec": 0, 00:09:41.159 "w_mbytes_per_sec": 0 00:09:41.159 }, 00:09:41.159 "claimed": true, 00:09:41.159 "claim_type": "exclusive_write", 00:09:41.159 "zoned": false, 00:09:41.159 "supported_io_types": { 00:09:41.159 "read": true, 00:09:41.159 "write": true, 00:09:41.160 "unmap": true, 00:09:41.160 "flush": true, 00:09:41.160 "reset": true, 00:09:41.160 "nvme_admin": false, 00:09:41.160 "nvme_io": false, 00:09:41.160 "nvme_io_md": false, 00:09:41.160 "write_zeroes": true, 00:09:41.160 "zcopy": true, 00:09:41.160 "get_zone_info": false, 00:09:41.160 "zone_management": false, 00:09:41.160 "zone_append": false, 00:09:41.160 "compare": false, 00:09:41.160 "compare_and_write": false, 00:09:41.160 
"abort": true, 00:09:41.160 "seek_hole": false, 00:09:41.160 "seek_data": false, 00:09:41.160 "copy": true, 00:09:41.160 "nvme_iov_md": false 00:09:41.160 }, 00:09:41.160 "memory_domains": [ 00:09:41.160 { 00:09:41.160 "dma_device_id": "system", 00:09:41.160 "dma_device_type": 1 00:09:41.160 }, 00:09:41.160 { 00:09:41.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.160 "dma_device_type": 2 00:09:41.160 } 00:09:41.160 ], 00:09:41.160 "driver_specific": {} 00:09:41.160 } 00:09:41.160 ] 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.160 "name": "Existed_Raid", 00:09:41.160 "uuid": "c3f47fc2-27f3-483a-90dc-40a053910efc", 00:09:41.160 "strip_size_kb": 64, 00:09:41.160 "state": "configuring", 00:09:41.160 "raid_level": "raid0", 00:09:41.160 "superblock": true, 00:09:41.160 "num_base_bdevs": 2, 00:09:41.160 "num_base_bdevs_discovered": 1, 00:09:41.160 "num_base_bdevs_operational": 2, 00:09:41.160 "base_bdevs_list": [ 00:09:41.160 { 00:09:41.160 "name": "BaseBdev1", 00:09:41.160 "uuid": "e3aa32fa-a5a7-43af-8b13-7144fac57bc3", 00:09:41.160 "is_configured": true, 00:09:41.160 "data_offset": 2048, 00:09:41.160 "data_size": 63488 00:09:41.160 }, 00:09:41.160 { 00:09:41.160 "name": "BaseBdev2", 00:09:41.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.160 "is_configured": false, 00:09:41.160 "data_offset": 0, 00:09:41.160 "data_size": 0 00:09:41.160 } 00:09:41.160 ] 00:09:41.160 }' 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.160 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.418 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.418 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.419 13:43:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.419 [2024-10-01 13:43:51.602161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.419 [2024-10-01 13:43:51.602227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:41.419 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.419 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:41.419 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.419 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.678 [2024-10-01 13:43:51.614209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.678 [2024-10-01 13:43:51.616463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.678 [2024-10-01 13:43:51.616517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.678 "name": "Existed_Raid", 00:09:41.678 "uuid": "117684a8-a524-47b3-807f-6d9c8772203d", 00:09:41.678 "strip_size_kb": 64, 00:09:41.678 "state": "configuring", 00:09:41.678 "raid_level": "raid0", 00:09:41.678 "superblock": true, 00:09:41.678 "num_base_bdevs": 2, 00:09:41.678 "num_base_bdevs_discovered": 1, 00:09:41.678 "num_base_bdevs_operational": 2, 00:09:41.678 "base_bdevs_list": [ 00:09:41.678 { 00:09:41.678 "name": "BaseBdev1", 00:09:41.678 "uuid": "e3aa32fa-a5a7-43af-8b13-7144fac57bc3", 00:09:41.678 "is_configured": true, 00:09:41.678 "data_offset": 2048, 
00:09:41.678 "data_size": 63488 00:09:41.678 }, 00:09:41.678 { 00:09:41.678 "name": "BaseBdev2", 00:09:41.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.678 "is_configured": false, 00:09:41.678 "data_offset": 0, 00:09:41.678 "data_size": 0 00:09:41.678 } 00:09:41.678 ] 00:09:41.678 }' 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.678 13:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 [2024-10-01 13:43:52.054557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.961 [2024-10-01 13:43:52.055120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.961 [2024-10-01 13:43:52.055301] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:41.961 [2024-10-01 13:43:52.055671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.961 BaseBdev2 00:09:41.961 [2024-10-01 13:43:52.055867] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.961 [2024-10-01 13:43:52.055886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.961 [2024-10-01 13:43:52.056048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.961 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 [ 00:09:41.961 { 00:09:41.961 "name": "BaseBdev2", 00:09:41.961 "aliases": [ 00:09:41.961 "f661a937-f3ff-4a76-942b-0aca594ba3ce" 00:09:41.961 ], 00:09:41.961 "product_name": "Malloc disk", 00:09:41.961 "block_size": 512, 00:09:41.961 "num_blocks": 65536, 00:09:41.961 "uuid": "f661a937-f3ff-4a76-942b-0aca594ba3ce", 00:09:41.961 "assigned_rate_limits": { 00:09:41.961 "rw_ios_per_sec": 0, 00:09:41.962 "rw_mbytes_per_sec": 0, 00:09:41.962 "r_mbytes_per_sec": 0, 00:09:41.962 "w_mbytes_per_sec": 0 00:09:41.962 }, 00:09:41.962 "claimed": true, 00:09:41.962 "claim_type": 
"exclusive_write", 00:09:41.962 "zoned": false, 00:09:41.962 "supported_io_types": { 00:09:41.962 "read": true, 00:09:41.962 "write": true, 00:09:41.962 "unmap": true, 00:09:41.962 "flush": true, 00:09:41.962 "reset": true, 00:09:41.962 "nvme_admin": false, 00:09:41.962 "nvme_io": false, 00:09:41.962 "nvme_io_md": false, 00:09:41.962 "write_zeroes": true, 00:09:41.962 "zcopy": true, 00:09:41.962 "get_zone_info": false, 00:09:41.962 "zone_management": false, 00:09:41.962 "zone_append": false, 00:09:41.962 "compare": false, 00:09:41.962 "compare_and_write": false, 00:09:41.962 "abort": true, 00:09:41.962 "seek_hole": false, 00:09:41.962 "seek_data": false, 00:09:41.962 "copy": true, 00:09:41.962 "nvme_iov_md": false 00:09:41.962 }, 00:09:41.962 "memory_domains": [ 00:09:41.962 { 00:09:41.962 "dma_device_id": "system", 00:09:41.962 "dma_device_type": 1 00:09:41.962 }, 00:09:41.962 { 00:09:41.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.962 "dma_device_type": 2 00:09:41.962 } 00:09:41.962 ], 00:09:41.962 "driver_specific": {} 00:09:41.962 } 00:09:41.962 ] 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.962 "name": "Existed_Raid", 00:09:41.962 "uuid": "117684a8-a524-47b3-807f-6d9c8772203d", 00:09:41.962 "strip_size_kb": 64, 00:09:41.962 "state": "online", 00:09:41.962 "raid_level": "raid0", 00:09:41.962 "superblock": true, 00:09:41.962 "num_base_bdevs": 2, 00:09:41.962 "num_base_bdevs_discovered": 2, 00:09:41.962 "num_base_bdevs_operational": 2, 00:09:41.962 "base_bdevs_list": [ 00:09:41.962 { 00:09:41.962 "name": "BaseBdev1", 00:09:41.962 "uuid": "e3aa32fa-a5a7-43af-8b13-7144fac57bc3", 00:09:41.962 "is_configured": true, 00:09:41.962 "data_offset": 2048, 00:09:41.962 "data_size": 63488 
00:09:41.962 }, 00:09:41.962 { 00:09:41.962 "name": "BaseBdev2", 00:09:41.962 "uuid": "f661a937-f3ff-4a76-942b-0aca594ba3ce", 00:09:41.962 "is_configured": true, 00:09:41.962 "data_offset": 2048, 00:09:41.962 "data_size": 63488 00:09:41.962 } 00:09:41.962 ] 00:09:41.962 }' 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.962 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.530 [2024-10-01 13:43:52.514252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.530 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.530 "name": 
"Existed_Raid", 00:09:42.530 "aliases": [ 00:09:42.530 "117684a8-a524-47b3-807f-6d9c8772203d" 00:09:42.530 ], 00:09:42.530 "product_name": "Raid Volume", 00:09:42.530 "block_size": 512, 00:09:42.530 "num_blocks": 126976, 00:09:42.531 "uuid": "117684a8-a524-47b3-807f-6d9c8772203d", 00:09:42.531 "assigned_rate_limits": { 00:09:42.531 "rw_ios_per_sec": 0, 00:09:42.531 "rw_mbytes_per_sec": 0, 00:09:42.531 "r_mbytes_per_sec": 0, 00:09:42.531 "w_mbytes_per_sec": 0 00:09:42.531 }, 00:09:42.531 "claimed": false, 00:09:42.531 "zoned": false, 00:09:42.531 "supported_io_types": { 00:09:42.531 "read": true, 00:09:42.531 "write": true, 00:09:42.531 "unmap": true, 00:09:42.531 "flush": true, 00:09:42.531 "reset": true, 00:09:42.531 "nvme_admin": false, 00:09:42.531 "nvme_io": false, 00:09:42.531 "nvme_io_md": false, 00:09:42.531 "write_zeroes": true, 00:09:42.531 "zcopy": false, 00:09:42.531 "get_zone_info": false, 00:09:42.531 "zone_management": false, 00:09:42.531 "zone_append": false, 00:09:42.531 "compare": false, 00:09:42.531 "compare_and_write": false, 00:09:42.531 "abort": false, 00:09:42.531 "seek_hole": false, 00:09:42.531 "seek_data": false, 00:09:42.531 "copy": false, 00:09:42.531 "nvme_iov_md": false 00:09:42.531 }, 00:09:42.531 "memory_domains": [ 00:09:42.531 { 00:09:42.531 "dma_device_id": "system", 00:09:42.531 "dma_device_type": 1 00:09:42.531 }, 00:09:42.531 { 00:09:42.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.531 "dma_device_type": 2 00:09:42.531 }, 00:09:42.531 { 00:09:42.531 "dma_device_id": "system", 00:09:42.531 "dma_device_type": 1 00:09:42.531 }, 00:09:42.531 { 00:09:42.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.531 "dma_device_type": 2 00:09:42.531 } 00:09:42.531 ], 00:09:42.531 "driver_specific": { 00:09:42.531 "raid": { 00:09:42.531 "uuid": "117684a8-a524-47b3-807f-6d9c8772203d", 00:09:42.531 "strip_size_kb": 64, 00:09:42.531 "state": "online", 00:09:42.531 "raid_level": "raid0", 00:09:42.531 "superblock": true, 00:09:42.531 
"num_base_bdevs": 2, 00:09:42.531 "num_base_bdevs_discovered": 2, 00:09:42.531 "num_base_bdevs_operational": 2, 00:09:42.531 "base_bdevs_list": [ 00:09:42.531 { 00:09:42.531 "name": "BaseBdev1", 00:09:42.531 "uuid": "e3aa32fa-a5a7-43af-8b13-7144fac57bc3", 00:09:42.531 "is_configured": true, 00:09:42.531 "data_offset": 2048, 00:09:42.531 "data_size": 63488 00:09:42.531 }, 00:09:42.531 { 00:09:42.531 "name": "BaseBdev2", 00:09:42.531 "uuid": "f661a937-f3ff-4a76-942b-0aca594ba3ce", 00:09:42.531 "is_configured": true, 00:09:42.531 "data_offset": 2048, 00:09:42.531 "data_size": 63488 00:09:42.531 } 00:09:42.531 ] 00:09:42.531 } 00:09:42.531 } 00:09:42.531 }' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:42.531 BaseBdev2' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.531 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.790 [2024-10-01 13:43:52.753691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.790 [2024-10-01 13:43:52.753728] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.790 [2024-10-01 13:43:52.753785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.790 13:43:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.790 "name": "Existed_Raid", 00:09:42.790 "uuid": "117684a8-a524-47b3-807f-6d9c8772203d", 00:09:42.790 "strip_size_kb": 64, 00:09:42.790 "state": "offline", 00:09:42.790 "raid_level": "raid0", 00:09:42.790 "superblock": true, 00:09:42.790 "num_base_bdevs": 2, 00:09:42.790 "num_base_bdevs_discovered": 1, 00:09:42.790 "num_base_bdevs_operational": 1, 00:09:42.790 "base_bdevs_list": [ 00:09:42.790 { 00:09:42.790 "name": null, 00:09:42.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.790 "is_configured": false, 00:09:42.790 "data_offset": 0, 00:09:42.790 "data_size": 63488 00:09:42.790 }, 00:09:42.790 { 00:09:42.790 "name": "BaseBdev2", 00:09:42.790 "uuid": "f661a937-f3ff-4a76-942b-0aca594ba3ce", 00:09:42.790 "is_configured": true, 00:09:42.790 "data_offset": 2048, 00:09:42.790 "data_size": 63488 00:09:42.790 } 00:09:42.790 ] 00:09:42.790 }' 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.790 13:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.358 13:43:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.358 [2024-10-01 13:43:53.328789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.358 [2024-10-01 13:43:53.328885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.358 13:43:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60824 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60824 ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60824 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60824 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60824' 00:09:43.358 killing process with pid 60824 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60824 00:09:43.358 [2024-10-01 13:43:53.539423] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.358 13:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60824 00:09:43.617 [2024-10-01 13:43:53.559120] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.994 13:43:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.994 00:09:44.994 real 0m5.366s 00:09:44.994 user 0m7.486s 00:09:44.994 sys 0m0.941s 00:09:44.994 13:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.994 13:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.994 ************************************ 00:09:44.994 END TEST raid_state_function_test_sb 00:09:44.994 ************************************ 00:09:44.994 13:43:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:44.994 13:43:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:44.994 13:43:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.994 13:43:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.994 ************************************ 00:09:44.994 START TEST raid_superblock_test 00:09:44.994 ************************************ 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.994 13:43:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61081 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61081 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61081 ']' 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.994 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.994 [2024-10-01 13:43:55.176340] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:44.994 [2024-10-01 13:43:55.176682] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61081 ] 00:09:45.254 [2024-10-01 13:43:55.334745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.513 [2024-10-01 13:43:55.604793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.772 [2024-10-01 13:43:55.833298] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.772 [2024-10-01 13:43:55.833367] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.031 13:43:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 malloc1 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 [2024-10-01 13:43:56.118075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.031 [2024-10-01 13:43:56.118185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.031 [2024-10-01 13:43:56.118216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:46.031 [2024-10-01 13:43:56.118236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.031 [2024-10-01 13:43:56.121421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.031 [2024-10-01 13:43:56.121602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.031 pt1 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.031 13:43:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 malloc2 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 [2024-10-01 13:43:56.199004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.031 [2024-10-01 13:43:56.199368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.031 [2024-10-01 13:43:56.199477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:46.031 
[2024-10-01 13:43:56.199618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.031 [2024-10-01 13:43:56.202689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.031 [2024-10-01 13:43:56.202852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.031 pt2 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.031 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.032 [2024-10-01 13:43:56.211280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.032 [2024-10-01 13:43:56.213968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.032 [2024-10-01 13:43:56.214295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:46.032 [2024-10-01 13:43:56.214420] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:46.032 [2024-10-01 13:43:56.214837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:46.032 [2024-10-01 13:43:56.215136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:46.032 [2024-10-01 13:43:56.215259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:46.032 [2024-10-01 13:43:56.215615] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.032 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.290 "name": "raid_bdev1", 00:09:46.290 "uuid": 
"27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:46.290 "strip_size_kb": 64, 00:09:46.290 "state": "online", 00:09:46.290 "raid_level": "raid0", 00:09:46.290 "superblock": true, 00:09:46.290 "num_base_bdevs": 2, 00:09:46.290 "num_base_bdevs_discovered": 2, 00:09:46.290 "num_base_bdevs_operational": 2, 00:09:46.290 "base_bdevs_list": [ 00:09:46.290 { 00:09:46.290 "name": "pt1", 00:09:46.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.290 "is_configured": true, 00:09:46.290 "data_offset": 2048, 00:09:46.290 "data_size": 63488 00:09:46.290 }, 00:09:46.290 { 00:09:46.290 "name": "pt2", 00:09:46.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.290 "is_configured": true, 00:09:46.290 "data_offset": 2048, 00:09:46.290 "data_size": 63488 00:09:46.290 } 00:09:46.290 ] 00:09:46.290 }' 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.290 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.549 
13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.549 [2024-10-01 13:43:56.607730] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.549 "name": "raid_bdev1", 00:09:46.549 "aliases": [ 00:09:46.549 "27c2acb5-b31a-4cae-af45-51072f3a11d4" 00:09:46.549 ], 00:09:46.549 "product_name": "Raid Volume", 00:09:46.549 "block_size": 512, 00:09:46.549 "num_blocks": 126976, 00:09:46.549 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:46.549 "assigned_rate_limits": { 00:09:46.549 "rw_ios_per_sec": 0, 00:09:46.549 "rw_mbytes_per_sec": 0, 00:09:46.549 "r_mbytes_per_sec": 0, 00:09:46.549 "w_mbytes_per_sec": 0 00:09:46.549 }, 00:09:46.549 "claimed": false, 00:09:46.549 "zoned": false, 00:09:46.549 "supported_io_types": { 00:09:46.549 "read": true, 00:09:46.549 "write": true, 00:09:46.549 "unmap": true, 00:09:46.549 "flush": true, 00:09:46.549 "reset": true, 00:09:46.549 "nvme_admin": false, 00:09:46.549 "nvme_io": false, 00:09:46.549 "nvme_io_md": false, 00:09:46.549 "write_zeroes": true, 00:09:46.549 "zcopy": false, 00:09:46.549 "get_zone_info": false, 00:09:46.549 "zone_management": false, 00:09:46.549 "zone_append": false, 00:09:46.549 "compare": false, 00:09:46.549 "compare_and_write": false, 00:09:46.549 "abort": false, 00:09:46.549 "seek_hole": false, 00:09:46.549 "seek_data": false, 00:09:46.549 "copy": false, 00:09:46.549 "nvme_iov_md": false 00:09:46.549 }, 00:09:46.549 "memory_domains": [ 00:09:46.549 { 00:09:46.549 "dma_device_id": "system", 00:09:46.549 "dma_device_type": 1 00:09:46.549 }, 00:09:46.549 { 00:09:46.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.549 "dma_device_type": 2 00:09:46.549 }, 00:09:46.549 { 00:09:46.549 "dma_device_id": "system", 00:09:46.549 
"dma_device_type": 1 00:09:46.549 }, 00:09:46.549 { 00:09:46.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.549 "dma_device_type": 2 00:09:46.549 } 00:09:46.549 ], 00:09:46.549 "driver_specific": { 00:09:46.549 "raid": { 00:09:46.549 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:46.549 "strip_size_kb": 64, 00:09:46.549 "state": "online", 00:09:46.549 "raid_level": "raid0", 00:09:46.549 "superblock": true, 00:09:46.549 "num_base_bdevs": 2, 00:09:46.549 "num_base_bdevs_discovered": 2, 00:09:46.549 "num_base_bdevs_operational": 2, 00:09:46.549 "base_bdevs_list": [ 00:09:46.549 { 00:09:46.549 "name": "pt1", 00:09:46.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.549 "is_configured": true, 00:09:46.549 "data_offset": 2048, 00:09:46.549 "data_size": 63488 00:09:46.549 }, 00:09:46.549 { 00:09:46.549 "name": "pt2", 00:09:46.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.549 "is_configured": true, 00:09:46.549 "data_offset": 2048, 00:09:46.549 "data_size": 63488 00:09:46.549 } 00:09:46.549 ] 00:09:46.549 } 00:09:46.549 } 00:09:46.549 }' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.549 pt2' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.549 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 [2024-10-01 13:43:56.823696] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27c2acb5-b31a-4cae-af45-51072f3a11d4 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27c2acb5-b31a-4cae-af45-51072f3a11d4 ']' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 [2024-10-01 13:43:56.871327] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.808 [2024-10-01 13:43:56.871617] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.808 [2024-10-01 13:43:56.871776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.808 [2024-10-01 13:43:56.871842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.808 [2024-10-01 13:43:56.871861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 
13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.808 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.067 [2024-10-01 13:43:56.999363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:47.067 [2024-10-01 13:43:57.002037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:47.067 [2024-10-01 13:43:57.002143] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:47.067 [2024-10-01 13:43:57.002226] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:47.067 [2024-10-01 13:43:57.002248] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.067 [2024-10-01 13:43:57.002264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:47.067 request: 00:09:47.067 { 00:09:47.067 "name": "raid_bdev1", 00:09:47.067 "raid_level": "raid0", 00:09:47.067 "base_bdevs": [ 00:09:47.067 "malloc1", 00:09:47.067 "malloc2" 00:09:47.067 ], 00:09:47.067 "strip_size_kb": 64, 00:09:47.067 "superblock": false, 00:09:47.067 "method": "bdev_raid_create", 00:09:47.067 "req_id": 1 00:09:47.067 } 00:09:47.067 Got JSON-RPC error response 00:09:47.067 response: 00:09:47.067 { 00:09:47.067 "code": -17, 00:09:47.067 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:47.067 } 00:09:47.067 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:47.067 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:47.067 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.068 [2024-10-01 13:43:57.063268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.068 [2024-10-01 13:43:57.063592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.068 [2024-10-01 13:43:57.063670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.068 [2024-10-01 13:43:57.063766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.068 [2024-10-01 13:43:57.066822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.068 [2024-10-01 13:43:57.066990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.068 [2024-10-01 13:43:57.067216] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:47.068 [2024-10-01 13:43:57.067445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.068 pt1 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.068 "name": "raid_bdev1", 00:09:47.068 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:47.068 "strip_size_kb": 64, 00:09:47.068 "state": "configuring", 00:09:47.068 "raid_level": "raid0", 00:09:47.068 "superblock": true, 00:09:47.068 "num_base_bdevs": 2, 00:09:47.068 "num_base_bdevs_discovered": 1, 00:09:47.068 "num_base_bdevs_operational": 2, 00:09:47.068 "base_bdevs_list": [ 00:09:47.068 { 00:09:47.068 "name": "pt1", 00:09:47.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.068 "is_configured": true, 00:09:47.068 "data_offset": 2048, 00:09:47.068 "data_size": 63488 00:09:47.068 }, 00:09:47.068 { 00:09:47.068 "name": null, 00:09:47.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.068 "is_configured": false, 00:09:47.068 "data_offset": 2048, 00:09:47.068 "data_size": 63488 00:09:47.068 } 00:09:47.068 ] 00:09:47.068 }' 00:09:47.068 13:43:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.068 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.327 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.327 [2024-10-01 13:43:57.515331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.327 [2024-10-01 13:43:57.515540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.327 [2024-10-01 13:43:57.515576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:47.327 [2024-10-01 13:43:57.515597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.327 [2024-10-01 13:43:57.516343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.327 [2024-10-01 13:43:57.516375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.327 [2024-10-01 13:43:57.516522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.327 [2024-10-01 13:43:57.516564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.327 [2024-10-01 13:43:57.516716] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:47.327 [2024-10-01 13:43:57.516733] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:47.328 [2024-10-01 13:43:57.517048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.328 [2024-10-01 13:43:57.517224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:47.328 [2024-10-01 13:43:57.517238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:47.328 [2024-10-01 13:43:57.517430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.586 pt2 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.586 "name": "raid_bdev1", 00:09:47.586 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:47.586 "strip_size_kb": 64, 00:09:47.586 "state": "online", 00:09:47.586 "raid_level": "raid0", 00:09:47.586 "superblock": true, 00:09:47.586 "num_base_bdevs": 2, 00:09:47.586 "num_base_bdevs_discovered": 2, 00:09:47.586 "num_base_bdevs_operational": 2, 00:09:47.586 "base_bdevs_list": [ 00:09:47.586 { 00:09:47.586 "name": "pt1", 00:09:47.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.586 "is_configured": true, 00:09:47.586 "data_offset": 2048, 00:09:47.586 "data_size": 63488 00:09:47.586 }, 00:09:47.586 { 00:09:47.586 "name": "pt2", 00:09:47.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.586 "is_configured": true, 00:09:47.586 "data_offset": 2048, 00:09:47.586 "data_size": 63488 00:09:47.586 } 00:09:47.586 ] 00:09:47.586 }' 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.586 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.845 
13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.845 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.845 [2024-10-01 13:43:57.983647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.845 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.845 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.845 "name": "raid_bdev1", 00:09:47.845 "aliases": [ 00:09:47.845 "27c2acb5-b31a-4cae-af45-51072f3a11d4" 00:09:47.845 ], 00:09:47.845 "product_name": "Raid Volume", 00:09:47.845 "block_size": 512, 00:09:47.845 "num_blocks": 126976, 00:09:47.845 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:47.845 "assigned_rate_limits": { 00:09:47.845 "rw_ios_per_sec": 0, 00:09:47.845 "rw_mbytes_per_sec": 0, 00:09:47.845 "r_mbytes_per_sec": 0, 00:09:47.845 "w_mbytes_per_sec": 0 00:09:47.845 }, 00:09:47.845 "claimed": false, 00:09:47.845 "zoned": false, 00:09:47.845 "supported_io_types": { 00:09:47.845 "read": true, 00:09:47.845 "write": true, 00:09:47.845 "unmap": true, 00:09:47.845 "flush": true, 00:09:47.845 "reset": true, 00:09:47.845 "nvme_admin": false, 00:09:47.846 "nvme_io": false, 00:09:47.846 "nvme_io_md": false, 00:09:47.846 
"write_zeroes": true, 00:09:47.846 "zcopy": false, 00:09:47.846 "get_zone_info": false, 00:09:47.846 "zone_management": false, 00:09:47.846 "zone_append": false, 00:09:47.846 "compare": false, 00:09:47.846 "compare_and_write": false, 00:09:47.846 "abort": false, 00:09:47.846 "seek_hole": false, 00:09:47.846 "seek_data": false, 00:09:47.846 "copy": false, 00:09:47.846 "nvme_iov_md": false 00:09:47.846 }, 00:09:47.846 "memory_domains": [ 00:09:47.846 { 00:09:47.846 "dma_device_id": "system", 00:09:47.846 "dma_device_type": 1 00:09:47.846 }, 00:09:47.846 { 00:09:47.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.846 "dma_device_type": 2 00:09:47.846 }, 00:09:47.846 { 00:09:47.846 "dma_device_id": "system", 00:09:47.846 "dma_device_type": 1 00:09:47.846 }, 00:09:47.846 { 00:09:47.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.846 "dma_device_type": 2 00:09:47.846 } 00:09:47.846 ], 00:09:47.846 "driver_specific": { 00:09:47.846 "raid": { 00:09:47.846 "uuid": "27c2acb5-b31a-4cae-af45-51072f3a11d4", 00:09:47.846 "strip_size_kb": 64, 00:09:47.846 "state": "online", 00:09:47.846 "raid_level": "raid0", 00:09:47.846 "superblock": true, 00:09:47.846 "num_base_bdevs": 2, 00:09:47.846 "num_base_bdevs_discovered": 2, 00:09:47.846 "num_base_bdevs_operational": 2, 00:09:47.846 "base_bdevs_list": [ 00:09:47.846 { 00:09:47.846 "name": "pt1", 00:09:47.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.846 "is_configured": true, 00:09:47.846 "data_offset": 2048, 00:09:47.846 "data_size": 63488 00:09:47.846 }, 00:09:47.846 { 00:09:47.846 "name": "pt2", 00:09:47.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.846 "is_configured": true, 00:09:47.846 "data_offset": 2048, 00:09:47.846 "data_size": 63488 00:09:47.846 } 00:09:47.846 ] 00:09:47.846 } 00:09:47.846 } 00:09:47.846 }' 00:09:47.846 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.105 pt2' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.105 13:43:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 [2024-10-01 13:43:58.203652] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27c2acb5-b31a-4cae-af45-51072f3a11d4 '!=' 27c2acb5-b31a-4cae-af45-51072f3a11d4 ']' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61081 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61081 ']' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61081 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61081 00:09:48.105 killing process with pid 61081 
00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61081' 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61081 00:09:48.105 13:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61081 00:09:48.105 [2024-10-01 13:43:58.278367] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.105 [2024-10-01 13:43:58.278561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.105 [2024-10-01 13:43:58.278635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.105 [2024-10-01 13:43:58.278653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:48.363 [2024-10-01 13:43:58.513160] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.739 ************************************ 00:09:49.739 END TEST raid_superblock_test 00:09:49.739 ************************************ 00:09:49.739 13:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:49.739 00:09:49.739 real 0m4.856s 00:09:49.739 user 0m6.575s 00:09:49.739 sys 0m0.941s 00:09:49.739 13:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.739 13:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.997 13:43:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:49.997 13:43:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:49.997 13:43:59 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.997 13:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.997 ************************************ 00:09:49.997 START TEST raid_read_error_test 00:09:49.998 ************************************ 00:09:49.998 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:09:49.998 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:49.998 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:49.998 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.998 13:44:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5j3Eu0H3MQ 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61293 00:09:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61293 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61293 ']' 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.998 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.998 [2024-10-01 13:44:00.127678] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:49.998 [2024-10-01 13:44:00.127846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:09:50.256 [2024-10-01 13:44:00.305927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.514 [2024-10-01 13:44:00.583711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.772 [2024-10-01 13:44:00.833187] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.772 [2024-10-01 13:44:00.833294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 BaseBdev1_malloc 00:09:51.062 13:44:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 true 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 [2024-10-01 13:44:01.077508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.062 [2024-10-01 13:44:01.078567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.062 [2024-10-01 13:44:01.078681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:51.062 [2024-10-01 13:44:01.078732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.062 [2024-10-01 13:44:01.086249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.062 [2024-10-01 13:44:01.086368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.062 BaseBdev1 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 BaseBdev2_malloc 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 true 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 [2024-10-01 13:44:01.163769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.062 [2024-10-01 13:44:01.163831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.062 [2024-10-01 13:44:01.163850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:51.062 [2024-10-01 13:44:01.163864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.062 [2024-10-01 13:44:01.166364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.062 [2024-10-01 13:44:01.166422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.062 BaseBdev2 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 [2024-10-01 13:44:01.171839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.062 [2024-10-01 13:44:01.173993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.062 [2024-10-01 13:44:01.174312] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:51.062 [2024-10-01 13:44:01.174335] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:51.062 [2024-10-01 13:44:01.174609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:51.062 [2024-10-01 13:44:01.174759] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:51.062 [2024-10-01 13:44:01.174770] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:51.062 [2024-10-01 13:44:01.174940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.062 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.062 "name": "raid_bdev1", 00:09:51.062 "uuid": "dc2fde1c-9528-4add-a19d-c38059c2c9fa", 00:09:51.062 "strip_size_kb": 64, 00:09:51.062 "state": "online", 00:09:51.062 "raid_level": "raid0", 00:09:51.062 "superblock": true, 00:09:51.062 "num_base_bdevs": 2, 00:09:51.062 "num_base_bdevs_discovered": 2, 00:09:51.062 "num_base_bdevs_operational": 2, 00:09:51.062 "base_bdevs_list": [ 00:09:51.062 { 00:09:51.062 "name": "BaseBdev1", 00:09:51.062 "uuid": "b861c968-ccb9-57fb-8770-6b584e08688a", 00:09:51.062 "is_configured": true, 00:09:51.062 "data_offset": 2048, 00:09:51.062 "data_size": 63488 00:09:51.062 }, 00:09:51.062 { 00:09:51.062 "name": "BaseBdev2", 00:09:51.062 "uuid": 
"18a1502a-f452-5713-a9c9-4e7ddbafe62b", 00:09:51.063 "is_configured": true, 00:09:51.063 "data_offset": 2048, 00:09:51.063 "data_size": 63488 00:09:51.063 } 00:09:51.063 ] 00:09:51.063 }' 00:09:51.063 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.063 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.644 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:51.644 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:51.644 [2024-10-01 13:44:01.704784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.577 "name": "raid_bdev1", 00:09:52.577 "uuid": "dc2fde1c-9528-4add-a19d-c38059c2c9fa", 00:09:52.577 "strip_size_kb": 64, 00:09:52.577 "state": "online", 00:09:52.577 "raid_level": "raid0", 00:09:52.577 "superblock": true, 00:09:52.577 "num_base_bdevs": 2, 00:09:52.577 "num_base_bdevs_discovered": 2, 00:09:52.577 "num_base_bdevs_operational": 2, 00:09:52.577 "base_bdevs_list": [ 00:09:52.577 { 00:09:52.577 "name": "BaseBdev1", 00:09:52.577 "uuid": "b861c968-ccb9-57fb-8770-6b584e08688a", 00:09:52.577 "is_configured": true, 00:09:52.577 "data_offset": 2048, 00:09:52.577 "data_size": 63488 00:09:52.577 }, 00:09:52.577 { 00:09:52.577 "name": "BaseBdev2", 00:09:52.577 "uuid": 
"18a1502a-f452-5713-a9c9-4e7ddbafe62b", 00:09:52.577 "is_configured": true, 00:09:52.577 "data_offset": 2048, 00:09:52.577 "data_size": 63488 00:09:52.577 } 00:09:52.577 ] 00:09:52.577 }' 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.577 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.144 [2024-10-01 13:44:03.064138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.144 [2024-10-01 13:44:03.064360] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.144 [2024-10-01 13:44:03.067146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.144 [2024-10-01 13:44:03.067196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.144 [2024-10-01 13:44:03.067230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.144 [2024-10-01 13:44:03.067245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:53.144 { 00:09:53.144 "results": [ 00:09:53.144 { 00:09:53.144 "job": "raid_bdev1", 00:09:53.144 "core_mask": "0x1", 00:09:53.144 "workload": "randrw", 00:09:53.144 "percentage": 50, 00:09:53.144 "status": "finished", 00:09:53.144 "queue_depth": 1, 00:09:53.144 "io_size": 131072, 00:09:53.144 "runtime": 1.359342, 00:09:53.144 "iops": 15559.734047796655, 00:09:53.144 "mibps": 1944.966755974582, 00:09:53.144 "io_failed": 1, 00:09:53.144 "io_timeout": 0, 00:09:53.144 "avg_latency_us": 
89.06457175145363, 00:09:53.144 "min_latency_us": 26.730923694779115, 00:09:53.144 "max_latency_us": 1467.3220883534136 00:09:53.144 } 00:09:53.144 ], 00:09:53.144 "core_count": 1 00:09:53.144 } 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61293 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61293 ']' 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61293 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61293 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.144 killing process with pid 61293 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61293' 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61293 00:09:53.144 [2024-10-01 13:44:03.105749] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.144 13:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61293 00:09:53.144 [2024-10-01 13:44:03.253151] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5j3Eu0H3MQ 00:09:54.556 
13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:54.556 00:09:54.556 real 0m4.635s 00:09:54.556 user 0m5.375s 00:09:54.556 sys 0m0.714s 00:09:54.556 ************************************ 00:09:54.556 END TEST raid_read_error_test 00:09:54.556 ************************************ 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.556 13:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.556 13:44:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:54.556 13:44:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.556 13:44:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.556 13:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.556 ************************************ 00:09:54.556 START TEST raid_write_error_test 00:09:54.556 ************************************ 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.556 13:44:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EUxBFlzgE1 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61444 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61444 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61444 ']' 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.556 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.815 [2024-10-01 13:44:04.825033] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:54.815 [2024-10-01 13:44:04.825170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61444 ] 00:09:54.815 [2024-10-01 13:44:05.000575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.073 [2024-10-01 13:44:05.229381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.331 [2024-10-01 13:44:05.448865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.331 [2024-10-01 13:44:05.449154] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.590 BaseBdev1_malloc 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.590 true 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.590 [2024-10-01 13:44:05.760052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.590 [2024-10-01 13:44:05.760135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.590 [2024-10-01 13:44:05.760162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.590 [2024-10-01 13:44:05.760180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.590 [2024-10-01 13:44:05.762899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.590 [2024-10-01 13:44:05.762949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.590 BaseBdev1 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.590 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 BaseBdev2_malloc 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.849 13:44:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 true 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 [2024-10-01 13:44:05.835583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.849 [2024-10-01 13:44:05.835656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.849 [2024-10-01 13:44:05.835679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.849 [2024-10-01 13:44:05.835694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.849 [2024-10-01 13:44:05.838298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.849 [2024-10-01 13:44:05.838353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.849 BaseBdev2 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 [2024-10-01 13:44:05.843673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:55.849 [2024-10-01 13:44:05.845899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.849 [2024-10-01 13:44:05.846307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.849 [2024-10-01 13:44:05.846331] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:55.849 [2024-10-01 13:44:05.846637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.849 [2024-10-01 13:44:05.846805] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.849 [2024-10-01 13:44:05.846816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:55.849 [2024-10-01 13:44:05.847011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.849 "name": "raid_bdev1", 00:09:55.849 "uuid": "844880fc-c98a-4560-b10b-fd51ba41418c", 00:09:55.849 "strip_size_kb": 64, 00:09:55.849 "state": "online", 00:09:55.849 "raid_level": "raid0", 00:09:55.849 "superblock": true, 00:09:55.849 "num_base_bdevs": 2, 00:09:55.849 "num_base_bdevs_discovered": 2, 00:09:55.849 "num_base_bdevs_operational": 2, 00:09:55.849 "base_bdevs_list": [ 00:09:55.849 { 00:09:55.849 "name": "BaseBdev1", 00:09:55.849 "uuid": "564df977-8563-58a8-b3c0-e78af4711dc4", 00:09:55.849 "is_configured": true, 00:09:55.849 "data_offset": 2048, 00:09:55.849 "data_size": 63488 00:09:55.849 }, 00:09:55.849 { 00:09:55.849 "name": "BaseBdev2", 00:09:55.849 "uuid": "139a0f2b-9516-5397-98e0-a1b64013cfeb", 00:09:55.849 "is_configured": true, 00:09:55.849 "data_offset": 2048, 00:09:55.849 "data_size": 63488 00:09:55.849 } 00:09:55.849 ] 00:09:55.849 }' 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.849 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.108 13:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.108 13:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.366 [2024-10-01 13:44:06.376810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.302 "name": "raid_bdev1", 00:09:57.302 "uuid": "844880fc-c98a-4560-b10b-fd51ba41418c", 00:09:57.302 "strip_size_kb": 64, 00:09:57.302 "state": "online", 00:09:57.302 "raid_level": "raid0", 00:09:57.302 "superblock": true, 00:09:57.302 "num_base_bdevs": 2, 00:09:57.302 "num_base_bdevs_discovered": 2, 00:09:57.302 "num_base_bdevs_operational": 2, 00:09:57.302 "base_bdevs_list": [ 00:09:57.302 { 00:09:57.302 "name": "BaseBdev1", 00:09:57.302 "uuid": "564df977-8563-58a8-b3c0-e78af4711dc4", 00:09:57.302 "is_configured": true, 00:09:57.302 "data_offset": 2048, 00:09:57.302 "data_size": 63488 00:09:57.302 }, 00:09:57.302 { 00:09:57.302 "name": "BaseBdev2", 00:09:57.302 "uuid": "139a0f2b-9516-5397-98e0-a1b64013cfeb", 00:09:57.302 "is_configured": true, 00:09:57.302 "data_offset": 2048, 00:09:57.302 "data_size": 63488 00:09:57.302 } 00:09:57.302 ] 00:09:57.302 }' 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.302 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.563 [2024-10-01 13:44:07.691621] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.563 [2024-10-01 13:44:07.691663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.563 [2024-10-01 13:44:07.695020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.563 [2024-10-01 13:44:07.695220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.563 [2024-10-01 13:44:07.695384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.563 [2024-10-01 13:44:07.695522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:57.563 { 00:09:57.563 "results": [ 00:09:57.563 { 00:09:57.563 "job": "raid_bdev1", 00:09:57.563 "core_mask": "0x1", 00:09:57.563 "workload": "randrw", 00:09:57.563 "percentage": 50, 00:09:57.563 "status": "finished", 00:09:57.563 "queue_depth": 1, 00:09:57.563 "io_size": 131072, 00:09:57.563 "runtime": 1.314638, 00:09:57.563 "iops": 15532.793057860796, 00:09:57.563 "mibps": 1941.5991322325995, 00:09:57.563 "io_failed": 1, 00:09:57.563 "io_timeout": 0, 00:09:57.563 "avg_latency_us": 89.2509044453609, 00:09:57.563 "min_latency_us": 26.730923694779115, 00:09:57.563 "max_latency_us": 1473.9020080321286 00:09:57.563 } 00:09:57.563 ], 00:09:57.563 "core_count": 1 00:09:57.563 } 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61444 00:09:57.563 13:44:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 61444 ']' 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61444 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61444 00:09:57.563 killing process with pid 61444 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61444' 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61444 00:09:57.563 [2024-10-01 13:44:07.742360] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.563 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61444 00:09:57.822 [2024-10-01 13:44:07.883049] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EUxBFlzgE1 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.202 13:44:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:59.202 00:09:59.202 real 0m4.586s 00:09:59.202 user 0m5.376s 00:09:59.202 sys 0m0.613s 00:09:59.202 ************************************ 00:09:59.202 END TEST raid_write_error_test 00:09:59.202 ************************************ 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.202 13:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.202 13:44:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.202 13:44:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:59.202 13:44:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:59.202 13:44:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.202 13:44:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.202 ************************************ 00:09:59.202 START TEST raid_state_function_test 00:09:59.202 ************************************ 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.202 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61582 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61582' 00:09:59.203 Process raid pid: 61582 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61582 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61582 ']' 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.203 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.461 [2024-10-01 13:44:09.481665] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
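The `verify_raid_bdev_state` calls traced above pull `rpc_cmd bdev_raid_get_bdevs all` output through a `jq -r '.[] | select(.name == ...)'` filter and then compare individual fields against the expected state. A minimal Python sketch of that same check, using the `raid_bdev1` record captured earlier in this log (the shell helper checks more fields, such as `num_base_bdevs`; this is a simplified illustration, not the actual test code):

```python
import json

# raid bdev record as reported by bdev_raid_get_bdevs in this log
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "844880fc-c98a-4560-b10b-fd51ba41418c",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the field-by-field comparison the shell helper performs
    # after the jq select filter.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # discovered count should agree with the configured base bdevs
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid0", 64, 2))
```

In the write-error test this check runs twice: once right after `bdev_raid_create`, and again after `bdev_error_inject_error ... write failure` plus a bdevperf run, confirming that a write error on a base bdev leaves a raid0 array online with both base bdevs still configured.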
00:09:59.461 [2024-10-01 13:44:09.481863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.721 [2024-10-01 13:44:09.683456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.721 [2024-10-01 13:44:09.911608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.980 [2024-10-01 13:44:10.137730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.980 [2024-10-01 13:44:10.137773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.238 [2024-10-01 13:44:10.340348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.238 [2024-10-01 13:44:10.340421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.238 [2024-10-01 13:44:10.340437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.238 [2024-10-01 13:44:10.340452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.238 13:44:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.238 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.238 "name": "Existed_Raid", 00:10:00.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.238 "strip_size_kb": 64, 00:10:00.238 "state": "configuring", 00:10:00.238 
"raid_level": "concat", 00:10:00.238 "superblock": false, 00:10:00.238 "num_base_bdevs": 2, 00:10:00.238 "num_base_bdevs_discovered": 0, 00:10:00.238 "num_base_bdevs_operational": 2, 00:10:00.238 "base_bdevs_list": [ 00:10:00.238 { 00:10:00.238 "name": "BaseBdev1", 00:10:00.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.238 "is_configured": false, 00:10:00.238 "data_offset": 0, 00:10:00.238 "data_size": 0 00:10:00.238 }, 00:10:00.238 { 00:10:00.238 "name": "BaseBdev2", 00:10:00.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.238 "is_configured": false, 00:10:00.238 "data_offset": 0, 00:10:00.238 "data_size": 0 00:10:00.238 } 00:10:00.238 ] 00:10:00.239 }' 00:10:00.239 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.239 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 [2024-10-01 13:44:10.811558] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.808 [2024-10-01 13:44:10.811736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:00.808 [2024-10-01 13:44:10.823584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.808 [2024-10-01 13:44:10.823636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.808 [2024-10-01 13:44:10.823647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.808 [2024-10-01 13:44:10.823664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 [2024-10-01 13:44:10.880742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.808 BaseBdev1 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 [ 00:10:00.808 { 00:10:00.808 "name": "BaseBdev1", 00:10:00.808 "aliases": [ 00:10:00.808 "73f9050f-8c69-4db9-8ce6-c91d8c304344" 00:10:00.808 ], 00:10:00.808 "product_name": "Malloc disk", 00:10:00.808 "block_size": 512, 00:10:00.808 "num_blocks": 65536, 00:10:00.808 "uuid": "73f9050f-8c69-4db9-8ce6-c91d8c304344", 00:10:00.808 "assigned_rate_limits": { 00:10:00.808 "rw_ios_per_sec": 0, 00:10:00.808 "rw_mbytes_per_sec": 0, 00:10:00.808 "r_mbytes_per_sec": 0, 00:10:00.808 "w_mbytes_per_sec": 0 00:10:00.808 }, 00:10:00.808 "claimed": true, 00:10:00.808 "claim_type": "exclusive_write", 00:10:00.808 "zoned": false, 00:10:00.808 "supported_io_types": { 00:10:00.808 "read": true, 00:10:00.808 "write": true, 00:10:00.808 "unmap": true, 00:10:00.808 "flush": true, 00:10:00.808 "reset": true, 00:10:00.808 "nvme_admin": false, 00:10:00.808 "nvme_io": false, 00:10:00.808 "nvme_io_md": false, 00:10:00.808 "write_zeroes": true, 00:10:00.808 "zcopy": true, 00:10:00.808 "get_zone_info": false, 00:10:00.808 "zone_management": false, 00:10:00.808 "zone_append": false, 00:10:00.808 "compare": false, 00:10:00.808 "compare_and_write": false, 00:10:00.808 "abort": true, 00:10:00.808 "seek_hole": false, 00:10:00.808 "seek_data": false, 00:10:00.808 "copy": true, 00:10:00.808 "nvme_iov_md": 
false 00:10:00.808 }, 00:10:00.808 "memory_domains": [ 00:10:00.808 { 00:10:00.808 "dma_device_id": "system", 00:10:00.808 "dma_device_type": 1 00:10:00.808 }, 00:10:00.808 { 00:10:00.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.808 "dma_device_type": 2 00:10:00.808 } 00:10:00.808 ], 00:10:00.808 "driver_specific": {} 00:10:00.808 } 00:10:00.808 ] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.808 13:44:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.808 "name": "Existed_Raid", 00:10:00.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.808 "strip_size_kb": 64, 00:10:00.808 "state": "configuring", 00:10:00.808 "raid_level": "concat", 00:10:00.808 "superblock": false, 00:10:00.808 "num_base_bdevs": 2, 00:10:00.808 "num_base_bdevs_discovered": 1, 00:10:00.808 "num_base_bdevs_operational": 2, 00:10:00.808 "base_bdevs_list": [ 00:10:00.808 { 00:10:00.808 "name": "BaseBdev1", 00:10:00.808 "uuid": "73f9050f-8c69-4db9-8ce6-c91d8c304344", 00:10:00.808 "is_configured": true, 00:10:00.808 "data_offset": 0, 00:10:00.808 "data_size": 65536 00:10:00.808 }, 00:10:00.808 { 00:10:00.808 "name": "BaseBdev2", 00:10:00.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.808 "is_configured": false, 00:10:00.808 "data_offset": 0, 00:10:00.808 "data_size": 0 00:10:00.808 } 00:10:00.808 ] 00:10:00.808 }' 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.808 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.378 [2024-10-01 13:44:11.368121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.378 [2024-10-01 13:44:11.368180] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.378 [2024-10-01 13:44:11.380138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.378 [2024-10-01 13:44:11.382292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.378 [2024-10-01 13:44:11.382344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.378 "name": "Existed_Raid", 00:10:01.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.378 "strip_size_kb": 64, 00:10:01.378 "state": "configuring", 00:10:01.378 "raid_level": "concat", 00:10:01.378 "superblock": false, 00:10:01.378 "num_base_bdevs": 2, 00:10:01.378 "num_base_bdevs_discovered": 1, 00:10:01.378 "num_base_bdevs_operational": 2, 00:10:01.378 "base_bdevs_list": [ 00:10:01.378 { 00:10:01.378 "name": "BaseBdev1", 00:10:01.378 "uuid": "73f9050f-8c69-4db9-8ce6-c91d8c304344", 00:10:01.378 "is_configured": true, 00:10:01.378 "data_offset": 0, 00:10:01.378 "data_size": 65536 00:10:01.378 }, 00:10:01.378 { 00:10:01.378 "name": "BaseBdev2", 00:10:01.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.378 "is_configured": false, 00:10:01.378 "data_offset": 0, 00:10:01.378 "data_size": 0 
00:10:01.378 } 00:10:01.378 ] 00:10:01.378 }' 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.378 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.945 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.945 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.945 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.945 [2024-10-01 13:44:11.887305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.945 [2024-10-01 13:44:11.887367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.945 [2024-10-01 13:44:11.887377] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:01.945 [2024-10-01 13:44:11.887692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:01.945 [2024-10-01 13:44:11.887872] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.945 [2024-10-01 13:44:11.887886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:01.945 [2024-10-01 13:44:11.888152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.945 BaseBdev2 00:10:01.945 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.946 13:44:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.946 [ 00:10:01.946 { 00:10:01.946 "name": "BaseBdev2", 00:10:01.946 "aliases": [ 00:10:01.946 "7943052c-624f-483b-96ce-ac1a5aeccd29" 00:10:01.946 ], 00:10:01.946 "product_name": "Malloc disk", 00:10:01.946 "block_size": 512, 00:10:01.946 "num_blocks": 65536, 00:10:01.946 "uuid": "7943052c-624f-483b-96ce-ac1a5aeccd29", 00:10:01.946 "assigned_rate_limits": { 00:10:01.946 "rw_ios_per_sec": 0, 00:10:01.946 "rw_mbytes_per_sec": 0, 00:10:01.946 "r_mbytes_per_sec": 0, 00:10:01.946 "w_mbytes_per_sec": 0 00:10:01.946 }, 00:10:01.946 "claimed": true, 00:10:01.946 "claim_type": "exclusive_write", 00:10:01.946 "zoned": false, 00:10:01.946 "supported_io_types": { 00:10:01.946 "read": true, 00:10:01.946 "write": true, 00:10:01.946 "unmap": true, 00:10:01.946 "flush": true, 00:10:01.946 "reset": true, 00:10:01.946 "nvme_admin": false, 00:10:01.946 "nvme_io": false, 00:10:01.946 "nvme_io_md": 
false, 00:10:01.946 "write_zeroes": true, 00:10:01.946 "zcopy": true, 00:10:01.946 "get_zone_info": false, 00:10:01.946 "zone_management": false, 00:10:01.946 "zone_append": false, 00:10:01.946 "compare": false, 00:10:01.946 "compare_and_write": false, 00:10:01.946 "abort": true, 00:10:01.946 "seek_hole": false, 00:10:01.946 "seek_data": false, 00:10:01.946 "copy": true, 00:10:01.946 "nvme_iov_md": false 00:10:01.946 }, 00:10:01.946 "memory_domains": [ 00:10:01.946 { 00:10:01.946 "dma_device_id": "system", 00:10:01.946 "dma_device_type": 1 00:10:01.946 }, 00:10:01.946 { 00:10:01.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.946 "dma_device_type": 2 00:10:01.946 } 00:10:01.946 ], 00:10:01.946 "driver_specific": {} 00:10:01.946 } 00:10:01.946 ] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.946 "name": "Existed_Raid", 00:10:01.946 "uuid": "f44bb15e-66c8-495e-9444-3cb307818503", 00:10:01.946 "strip_size_kb": 64, 00:10:01.946 "state": "online", 00:10:01.946 "raid_level": "concat", 00:10:01.946 "superblock": false, 00:10:01.946 "num_base_bdevs": 2, 00:10:01.946 "num_base_bdevs_discovered": 2, 00:10:01.946 "num_base_bdevs_operational": 2, 00:10:01.946 "base_bdevs_list": [ 00:10:01.946 { 00:10:01.946 "name": "BaseBdev1", 00:10:01.946 "uuid": "73f9050f-8c69-4db9-8ce6-c91d8c304344", 00:10:01.946 "is_configured": true, 00:10:01.946 "data_offset": 0, 00:10:01.946 "data_size": 65536 00:10:01.946 }, 00:10:01.946 { 00:10:01.946 "name": "BaseBdev2", 00:10:01.946 "uuid": "7943052c-624f-483b-96ce-ac1a5aeccd29", 00:10:01.946 "is_configured": true, 00:10:01.946 "data_offset": 0, 00:10:01.946 "data_size": 65536 00:10:01.946 } 00:10:01.946 ] 00:10:01.946 }' 00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:01.946 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.205 [2024-10-01 13:44:12.327696] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.205 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.205 "name": "Existed_Raid", 00:10:02.205 "aliases": [ 00:10:02.205 "f44bb15e-66c8-495e-9444-3cb307818503" 00:10:02.205 ], 00:10:02.205 "product_name": "Raid Volume", 00:10:02.205 "block_size": 512, 00:10:02.205 "num_blocks": 131072, 00:10:02.205 "uuid": "f44bb15e-66c8-495e-9444-3cb307818503", 00:10:02.205 "assigned_rate_limits": { 00:10:02.205 "rw_ios_per_sec": 0, 00:10:02.205 "rw_mbytes_per_sec": 0, 00:10:02.205 "r_mbytes_per_sec": 
0, 00:10:02.205 "w_mbytes_per_sec": 0 00:10:02.205 }, 00:10:02.205 "claimed": false, 00:10:02.205 "zoned": false, 00:10:02.205 "supported_io_types": { 00:10:02.205 "read": true, 00:10:02.205 "write": true, 00:10:02.205 "unmap": true, 00:10:02.205 "flush": true, 00:10:02.205 "reset": true, 00:10:02.205 "nvme_admin": false, 00:10:02.205 "nvme_io": false, 00:10:02.205 "nvme_io_md": false, 00:10:02.205 "write_zeroes": true, 00:10:02.205 "zcopy": false, 00:10:02.205 "get_zone_info": false, 00:10:02.205 "zone_management": false, 00:10:02.205 "zone_append": false, 00:10:02.205 "compare": false, 00:10:02.205 "compare_and_write": false, 00:10:02.205 "abort": false, 00:10:02.205 "seek_hole": false, 00:10:02.205 "seek_data": false, 00:10:02.205 "copy": false, 00:10:02.206 "nvme_iov_md": false 00:10:02.206 }, 00:10:02.206 "memory_domains": [ 00:10:02.206 { 00:10:02.206 "dma_device_id": "system", 00:10:02.206 "dma_device_type": 1 00:10:02.206 }, 00:10:02.206 { 00:10:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.206 "dma_device_type": 2 00:10:02.206 }, 00:10:02.206 { 00:10:02.206 "dma_device_id": "system", 00:10:02.206 "dma_device_type": 1 00:10:02.206 }, 00:10:02.206 { 00:10:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.206 "dma_device_type": 2 00:10:02.206 } 00:10:02.206 ], 00:10:02.206 "driver_specific": { 00:10:02.206 "raid": { 00:10:02.206 "uuid": "f44bb15e-66c8-495e-9444-3cb307818503", 00:10:02.206 "strip_size_kb": 64, 00:10:02.206 "state": "online", 00:10:02.206 "raid_level": "concat", 00:10:02.206 "superblock": false, 00:10:02.206 "num_base_bdevs": 2, 00:10:02.206 "num_base_bdevs_discovered": 2, 00:10:02.206 "num_base_bdevs_operational": 2, 00:10:02.206 "base_bdevs_list": [ 00:10:02.206 { 00:10:02.206 "name": "BaseBdev1", 00:10:02.206 "uuid": "73f9050f-8c69-4db9-8ce6-c91d8c304344", 00:10:02.206 "is_configured": true, 00:10:02.206 "data_offset": 0, 00:10:02.206 "data_size": 65536 00:10:02.206 }, 00:10:02.206 { 00:10:02.206 "name": "BaseBdev2", 
00:10:02.206 "uuid": "7943052c-624f-483b-96ce-ac1a5aeccd29", 00:10:02.206 "is_configured": true, 00:10:02.206 "data_offset": 0, 00:10:02.206 "data_size": 65536 00:10:02.206 } 00:10:02.206 ] 00:10:02.206 } 00:10:02.206 } 00:10:02.206 }' 00:10:02.206 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:02.465 BaseBdev2' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.465 [2024-10-01 13:44:12.543329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.465 [2024-10-01 13:44:12.543369] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.465 [2024-10-01 13:44:12.543443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.465 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.725 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.725 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.725 "name": "Existed_Raid", 00:10:02.725 "uuid": "f44bb15e-66c8-495e-9444-3cb307818503", 00:10:02.725 "strip_size_kb": 64, 00:10:02.725 
"state": "offline", 00:10:02.725 "raid_level": "concat", 00:10:02.725 "superblock": false, 00:10:02.725 "num_base_bdevs": 2, 00:10:02.725 "num_base_bdevs_discovered": 1, 00:10:02.725 "num_base_bdevs_operational": 1, 00:10:02.725 "base_bdevs_list": [ 00:10:02.725 { 00:10:02.725 "name": null, 00:10:02.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.725 "is_configured": false, 00:10:02.725 "data_offset": 0, 00:10:02.725 "data_size": 65536 00:10:02.725 }, 00:10:02.725 { 00:10:02.725 "name": "BaseBdev2", 00:10:02.725 "uuid": "7943052c-624f-483b-96ce-ac1a5aeccd29", 00:10:02.725 "is_configured": true, 00:10:02.725 "data_offset": 0, 00:10:02.725 "data_size": 65536 00:10:02.725 } 00:10:02.725 ] 00:10:02.725 }' 00:10:02.725 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.725 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.021 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.021 [2024-10-01 13:44:13.135313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.021 [2024-10-01 13:44:13.135508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61582 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61582 ']' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61582 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61582 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.280 killing process with pid 61582 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61582' 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61582 00:10:03.280 [2024-10-01 13:44:13.325335] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.280 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61582 00:10:03.280 [2024-10-01 13:44:13.342314] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.658 00:10:04.658 real 0m5.260s 00:10:04.658 user 0m7.446s 00:10:04.658 sys 0m0.933s 00:10:04.658 ************************************ 00:10:04.658 END TEST raid_state_function_test 00:10:04.658 ************************************ 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.658 13:44:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:04.658 13:44:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:10:04.658 13:44:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.658 13:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.658 ************************************ 00:10:04.658 START TEST raid_state_function_test_sb 00:10:04.658 ************************************ 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61835 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61835' 00:10:04.658 Process raid pid: 61835 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61835 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61835 ']' 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.658 13:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.658 [2024-10-01 13:44:14.803636] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:04.658 [2024-10-01 13:44:14.803769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.917 [2024-10-01 13:44:14.977201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.177 [2024-10-01 13:44:15.253866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.435 [2024-10-01 13:44:15.505027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.435 [2024-10-01 13:44:15.505072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:05.694 [2024-10-01 13:44:15.702967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.694 [2024-10-01 13:44:15.703022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.694 [2024-10-01 13:44:15.703038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.694 [2024-10-01 13:44:15.703051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.694 13:44:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.694 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.694 "name": "Existed_Raid", 00:10:05.695 "uuid": "f4219aed-4db2-492f-86c4-24912ec480d9", 00:10:05.695 "strip_size_kb": 64, 00:10:05.695 "state": "configuring", 00:10:05.695 "raid_level": "concat", 00:10:05.695 "superblock": true, 00:10:05.695 "num_base_bdevs": 2, 00:10:05.695 "num_base_bdevs_discovered": 0, 00:10:05.695 "num_base_bdevs_operational": 2, 00:10:05.695 "base_bdevs_list": [ 00:10:05.695 { 00:10:05.695 "name": "BaseBdev1", 00:10:05.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.695 "is_configured": false, 00:10:05.695 "data_offset": 0, 00:10:05.695 "data_size": 0 00:10:05.695 }, 00:10:05.695 { 00:10:05.695 "name": "BaseBdev2", 00:10:05.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.695 "is_configured": false, 00:10:05.695 "data_offset": 0, 00:10:05.695 "data_size": 0 00:10:05.695 } 00:10:05.695 ] 00:10:05.695 }' 00:10:05.695 13:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.695 13:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.954 
[2024-10-01 13:44:16.082417] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.954 [2024-10-01 13:44:16.082457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.954 [2024-10-01 13:44:16.094430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.954 [2024-10-01 13:44:16.094481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.954 [2024-10-01 13:44:16.094491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.954 [2024-10-01 13:44:16.094507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.954 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.213 [2024-10-01 13:44:16.152630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.213 BaseBdev1 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.213 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.213 [ 00:10:06.213 { 00:10:06.213 "name": "BaseBdev1", 00:10:06.213 "aliases": [ 00:10:06.213 "bd9a469d-308f-4398-b6c7-7e52098f40fc" 00:10:06.213 ], 00:10:06.213 "product_name": "Malloc disk", 00:10:06.213 "block_size": 512, 00:10:06.213 "num_blocks": 65536, 00:10:06.213 "uuid": "bd9a469d-308f-4398-b6c7-7e52098f40fc", 00:10:06.213 "assigned_rate_limits": { 00:10:06.213 "rw_ios_per_sec": 0, 00:10:06.213 "rw_mbytes_per_sec": 0, 
00:10:06.213 "r_mbytes_per_sec": 0, 00:10:06.213 "w_mbytes_per_sec": 0 00:10:06.213 }, 00:10:06.213 "claimed": true, 00:10:06.213 "claim_type": "exclusive_write", 00:10:06.213 "zoned": false, 00:10:06.213 "supported_io_types": { 00:10:06.213 "read": true, 00:10:06.213 "write": true, 00:10:06.213 "unmap": true, 00:10:06.213 "flush": true, 00:10:06.213 "reset": true, 00:10:06.213 "nvme_admin": false, 00:10:06.213 "nvme_io": false, 00:10:06.213 "nvme_io_md": false, 00:10:06.213 "write_zeroes": true, 00:10:06.213 "zcopy": true, 00:10:06.213 "get_zone_info": false, 00:10:06.213 "zone_management": false, 00:10:06.213 "zone_append": false, 00:10:06.213 "compare": false, 00:10:06.213 "compare_and_write": false, 00:10:06.213 "abort": true, 00:10:06.213 "seek_hole": false, 00:10:06.213 "seek_data": false, 00:10:06.213 "copy": true, 00:10:06.213 "nvme_iov_md": false 00:10:06.213 }, 00:10:06.213 "memory_domains": [ 00:10:06.213 { 00:10:06.213 "dma_device_id": "system", 00:10:06.214 "dma_device_type": 1 00:10:06.214 }, 00:10:06.214 { 00:10:06.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.214 "dma_device_type": 2 00:10:06.214 } 00:10:06.214 ], 00:10:06.214 "driver_specific": {} 00:10:06.214 } 00:10:06.214 ] 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.214 13:44:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.214 "name": "Existed_Raid", 00:10:06.214 "uuid": "70c67817-4d1e-4fa9-8645-26b5c10fd473", 00:10:06.214 "strip_size_kb": 64, 00:10:06.214 "state": "configuring", 00:10:06.214 "raid_level": "concat", 00:10:06.214 "superblock": true, 00:10:06.214 "num_base_bdevs": 2, 00:10:06.214 "num_base_bdevs_discovered": 1, 00:10:06.214 "num_base_bdevs_operational": 2, 00:10:06.214 "base_bdevs_list": [ 00:10:06.214 { 00:10:06.214 "name": "BaseBdev1", 00:10:06.214 "uuid": "bd9a469d-308f-4398-b6c7-7e52098f40fc", 00:10:06.214 "is_configured": true, 00:10:06.214 "data_offset": 2048, 00:10:06.214 "data_size": 63488 00:10:06.214 }, 00:10:06.214 { 
00:10:06.214 "name": "BaseBdev2", 00:10:06.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.214 "is_configured": false, 00:10:06.214 "data_offset": 0, 00:10:06.214 "data_size": 0 00:10:06.214 } 00:10:06.214 ] 00:10:06.214 }' 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.214 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.472 [2024-10-01 13:44:16.624026] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.472 [2024-10-01 13:44:16.624093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.472 [2024-10-01 13:44:16.636073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.472 [2024-10-01 13:44:16.638288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.472 [2024-10-01 13:44:16.638354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.472 13:44:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.472 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.732 13:44:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.732 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.732 "name": "Existed_Raid", 00:10:06.732 "uuid": "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539", 00:10:06.732 "strip_size_kb": 64, 00:10:06.732 "state": "configuring", 00:10:06.732 "raid_level": "concat", 00:10:06.732 "superblock": true, 00:10:06.732 "num_base_bdevs": 2, 00:10:06.732 "num_base_bdevs_discovered": 1, 00:10:06.732 "num_base_bdevs_operational": 2, 00:10:06.732 "base_bdevs_list": [ 00:10:06.732 { 00:10:06.732 "name": "BaseBdev1", 00:10:06.732 "uuid": "bd9a469d-308f-4398-b6c7-7e52098f40fc", 00:10:06.732 "is_configured": true, 00:10:06.732 "data_offset": 2048, 00:10:06.732 "data_size": 63488 00:10:06.732 }, 00:10:06.732 { 00:10:06.732 "name": "BaseBdev2", 00:10:06.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.732 "is_configured": false, 00:10:06.732 "data_offset": 0, 00:10:06.732 "data_size": 0 00:10:06.732 } 00:10:06.732 ] 00:10:06.732 }' 00:10:06.732 13:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.732 13:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.990 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.990 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.990 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.990 [2024-10-01 13:44:17.079843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.990 [2024-10-01 13:44:17.080117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.990 [2024-10-01 13:44:17.080133] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:10:06.990 [2024-10-01 13:44:17.080496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:06.991 [2024-10-01 13:44:17.080643] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.991 [2024-10-01 13:44:17.080662] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.991 BaseBdev2 00:10:06.991 [2024-10-01 13:44:17.080804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.991 13:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.991 [ 00:10:06.991 { 00:10:06.991 "name": "BaseBdev2", 00:10:06.991 "aliases": [ 00:10:06.991 "781556c4-f0a2-4b49-a078-c54c12464f98" 00:10:06.991 ], 00:10:06.991 "product_name": "Malloc disk", 00:10:06.991 "block_size": 512, 00:10:06.991 "num_blocks": 65536, 00:10:06.991 "uuid": "781556c4-f0a2-4b49-a078-c54c12464f98", 00:10:06.991 "assigned_rate_limits": { 00:10:06.991 "rw_ios_per_sec": 0, 00:10:06.991 "rw_mbytes_per_sec": 0, 00:10:06.991 "r_mbytes_per_sec": 0, 00:10:06.991 "w_mbytes_per_sec": 0 00:10:06.991 }, 00:10:06.991 "claimed": true, 00:10:06.991 "claim_type": "exclusive_write", 00:10:06.991 "zoned": false, 00:10:06.991 "supported_io_types": { 00:10:06.991 "read": true, 00:10:06.991 "write": true, 00:10:06.991 "unmap": true, 00:10:06.991 "flush": true, 00:10:06.991 "reset": true, 00:10:06.991 "nvme_admin": false, 00:10:06.991 "nvme_io": false, 00:10:06.991 "nvme_io_md": false, 00:10:06.991 "write_zeroes": true, 00:10:06.991 "zcopy": true, 00:10:06.991 "get_zone_info": false, 00:10:06.991 "zone_management": false, 00:10:06.991 "zone_append": false, 00:10:06.991 "compare": false, 00:10:06.991 "compare_and_write": false, 00:10:06.991 "abort": true, 00:10:06.991 "seek_hole": false, 00:10:06.991 "seek_data": false, 00:10:06.991 "copy": true, 00:10:06.991 "nvme_iov_md": false 00:10:06.991 }, 00:10:06.991 "memory_domains": [ 00:10:06.991 { 00:10:06.991 "dma_device_id": "system", 00:10:06.991 "dma_device_type": 1 00:10:06.991 }, 00:10:06.991 { 00:10:06.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.991 "dma_device_type": 2 00:10:06.991 } 00:10:06.991 ], 00:10:06.991 "driver_specific": {} 00:10:06.991 } 00:10:06.991 ] 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.991 13:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.991 13:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.991 "name": "Existed_Raid", 00:10:06.991 "uuid": "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539", 00:10:06.991 "strip_size_kb": 64, 00:10:06.991 "state": "online", 00:10:06.991 "raid_level": "concat", 00:10:06.991 "superblock": true, 00:10:06.991 "num_base_bdevs": 2, 00:10:06.991 "num_base_bdevs_discovered": 2, 00:10:06.991 "num_base_bdevs_operational": 2, 00:10:06.991 "base_bdevs_list": [ 00:10:06.991 { 00:10:06.991 "name": "BaseBdev1", 00:10:06.991 "uuid": "bd9a469d-308f-4398-b6c7-7e52098f40fc", 00:10:06.991 "is_configured": true, 00:10:06.991 "data_offset": 2048, 00:10:06.991 "data_size": 63488 00:10:06.991 }, 00:10:06.991 { 00:10:06.991 "name": "BaseBdev2", 00:10:06.991 "uuid": "781556c4-f0a2-4b49-a078-c54c12464f98", 00:10:06.991 "is_configured": true, 00:10:06.991 "data_offset": 2048, 00:10:06.991 "data_size": 63488 00:10:06.991 } 00:10:06.991 ] 00:10:06.991 }' 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.991 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.560 [2024-10-01 13:44:17.576740] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.560 "name": "Existed_Raid", 00:10:07.560 "aliases": [ 00:10:07.560 "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539" 00:10:07.560 ], 00:10:07.560 "product_name": "Raid Volume", 00:10:07.560 "block_size": 512, 00:10:07.560 "num_blocks": 126976, 00:10:07.560 "uuid": "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539", 00:10:07.560 "assigned_rate_limits": { 00:10:07.560 "rw_ios_per_sec": 0, 00:10:07.560 "rw_mbytes_per_sec": 0, 00:10:07.560 "r_mbytes_per_sec": 0, 00:10:07.560 "w_mbytes_per_sec": 0 00:10:07.560 }, 00:10:07.560 "claimed": false, 00:10:07.560 "zoned": false, 00:10:07.560 "supported_io_types": { 00:10:07.560 "read": true, 00:10:07.560 "write": true, 00:10:07.560 "unmap": true, 00:10:07.560 "flush": true, 00:10:07.560 "reset": true, 00:10:07.560 "nvme_admin": false, 00:10:07.560 "nvme_io": false, 00:10:07.560 "nvme_io_md": false, 00:10:07.560 "write_zeroes": true, 00:10:07.560 "zcopy": false, 00:10:07.560 "get_zone_info": false, 00:10:07.560 "zone_management": false, 00:10:07.560 "zone_append": false, 00:10:07.560 "compare": false, 00:10:07.560 "compare_and_write": false, 00:10:07.560 "abort": false, 00:10:07.560 "seek_hole": false, 00:10:07.560 "seek_data": false, 00:10:07.560 "copy": false, 
00:10:07.560 "nvme_iov_md": false 00:10:07.560 }, 00:10:07.560 "memory_domains": [ 00:10:07.560 { 00:10:07.560 "dma_device_id": "system", 00:10:07.560 "dma_device_type": 1 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.560 "dma_device_type": 2 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "dma_device_id": "system", 00:10:07.560 "dma_device_type": 1 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.560 "dma_device_type": 2 00:10:07.560 } 00:10:07.560 ], 00:10:07.560 "driver_specific": { 00:10:07.560 "raid": { 00:10:07.560 "uuid": "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539", 00:10:07.560 "strip_size_kb": 64, 00:10:07.560 "state": "online", 00:10:07.560 "raid_level": "concat", 00:10:07.560 "superblock": true, 00:10:07.560 "num_base_bdevs": 2, 00:10:07.560 "num_base_bdevs_discovered": 2, 00:10:07.560 "num_base_bdevs_operational": 2, 00:10:07.560 "base_bdevs_list": [ 00:10:07.560 { 00:10:07.560 "name": "BaseBdev1", 00:10:07.560 "uuid": "bd9a469d-308f-4398-b6c7-7e52098f40fc", 00:10:07.560 "is_configured": true, 00:10:07.560 "data_offset": 2048, 00:10:07.560 "data_size": 63488 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "name": "BaseBdev2", 00:10:07.560 "uuid": "781556c4-f0a2-4b49-a078-c54c12464f98", 00:10:07.560 "is_configured": true, 00:10:07.560 "data_offset": 2048, 00:10:07.560 "data_size": 63488 00:10:07.560 } 00:10:07.560 ] 00:10:07.560 } 00:10:07.560 } 00:10:07.560 }' 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.560 BaseBdev2' 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.560 13:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.560 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.561 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 [2024-10-01 13:44:17.792143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.821 [2024-10-01 13:44:17.792183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.821 [2024-10-01 13:44:17.792238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.821 
13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.821 "name": "Existed_Raid", 00:10:07.821 "uuid": "a67ea7cf-e7a3-4885-a2b9-4c1e8b582539", 00:10:07.821 "strip_size_kb": 64, 00:10:07.821 "state": "offline", 00:10:07.821 "raid_level": "concat", 00:10:07.821 "superblock": true, 00:10:07.821 "num_base_bdevs": 2, 00:10:07.821 "num_base_bdevs_discovered": 1, 00:10:07.821 "num_base_bdevs_operational": 1, 00:10:07.821 "base_bdevs_list": [ 00:10:07.821 { 00:10:07.821 "name": null, 00:10:07.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.821 "is_configured": false, 00:10:07.821 "data_offset": 0, 00:10:07.821 "data_size": 63488 00:10:07.821 }, 00:10:07.821 { 00:10:07.821 "name": "BaseBdev2", 00:10:07.821 "uuid": "781556c4-f0a2-4b49-a078-c54c12464f98", 00:10:07.821 
"is_configured": true, 00:10:07.821 "data_offset": 2048, 00:10:07.821 "data_size": 63488 00:10:07.821 } 00:10:07.821 ] 00:10:07.821 }' 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.821 13:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.390 [2024-10-01 13:44:18.339985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.390 [2024-10-01 13:44:18.340052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.390 13:44:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61835 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61835 ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61835 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61835 00:10:08.390 killing process with pid 61835 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61835' 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61835 00:10:08.390 [2024-10-01 13:44:18.530581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.390 13:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61835 00:10:08.390 [2024-10-01 13:44:18.549025] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.832 ************************************ 00:10:09.832 END TEST raid_state_function_test_sb 00:10:09.832 ************************************ 00:10:09.832 13:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.832 00:10:09.832 real 0m5.155s 00:10:09.832 user 0m7.235s 00:10:09.832 sys 0m0.947s 00:10:09.832 13:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.832 13:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 13:44:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:09.832 13:44:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:09.832 13:44:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.832 13:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 ************************************ 00:10:09.832 START TEST raid_superblock_test 00:10:09.832 ************************************ 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62087 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62087 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:09.832 
13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62087 ']' 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.832 13:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 [2024-10-01 13:44:20.018240] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:09.832 [2024-10-01 13:44:20.018386] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62087 ] 00:10:10.091 [2024-10-01 13:44:20.180602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.350 [2024-10-01 13:44:20.389597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.610 [2024-10-01 13:44:20.596605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.610 [2024-10-01 13:44:20.596671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.869 malloc1 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.869 [2024-10-01 13:44:20.985593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.869 [2024-10-01 13:44:20.985825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.869 [2024-10-01 13:44:20.985891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:10:10.869 [2024-10-01 13:44:20.985986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.869 [2024-10-01 13:44:20.988663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.869 [2024-10-01 13:44:20.988830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.869 pt1 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.869 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.870 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.870 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.870 13:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.870 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.870 13:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 malloc2 00:10:10.870 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.870 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:10:10.870 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.870 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 [2024-10-01 13:44:21.060418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.128 [2024-10-01 13:44:21.060649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.128 [2024-10-01 13:44:21.060719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:11.128 [2024-10-01 13:44:21.060816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.128 [2024-10-01 13:44:21.063537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.128 [2024-10-01 13:44:21.063698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.128 pt2 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.128 [2024-10-01 13:44:21.072639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.128 [2024-10-01 13:44:21.074874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.128 [2024-10-01 13:44:21.075235] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:10:11.128 [2024-10-01 13:44:21.075256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:11.128 [2024-10-01 13:44:21.075586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:11.128 [2024-10-01 13:44:21.075754] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:11.128 [2024-10-01 13:44:21.075768] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:11.128 [2024-10-01 13:44:21.075938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.128 "name": "raid_bdev1", 00:10:11.128 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:11.128 "strip_size_kb": 64, 00:10:11.128 "state": "online", 00:10:11.128 "raid_level": "concat", 00:10:11.128 "superblock": true, 00:10:11.128 "num_base_bdevs": 2, 00:10:11.128 "num_base_bdevs_discovered": 2, 00:10:11.128 "num_base_bdevs_operational": 2, 00:10:11.128 "base_bdevs_list": [ 00:10:11.128 { 00:10:11.128 "name": "pt1", 00:10:11.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.128 "is_configured": true, 00:10:11.128 "data_offset": 2048, 00:10:11.128 "data_size": 63488 00:10:11.128 }, 00:10:11.128 { 00:10:11.128 "name": "pt2", 00:10:11.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.128 "is_configured": true, 00:10:11.128 "data_offset": 2048, 00:10:11.128 "data_size": 63488 00:10:11.128 } 00:10:11.128 ] 00:10:11.128 }' 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.128 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.387 [2024-10-01 13:44:21.492331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.387 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.387 "name": "raid_bdev1", 00:10:11.387 "aliases": [ 00:10:11.387 "0430b4c1-4431-4ee9-b9dd-318faa1f815a" 00:10:11.387 ], 00:10:11.387 "product_name": "Raid Volume", 00:10:11.387 "block_size": 512, 00:10:11.387 "num_blocks": 126976, 00:10:11.387 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:11.387 "assigned_rate_limits": { 00:10:11.387 "rw_ios_per_sec": 0, 00:10:11.387 "rw_mbytes_per_sec": 0, 00:10:11.387 "r_mbytes_per_sec": 0, 00:10:11.387 "w_mbytes_per_sec": 0 00:10:11.387 }, 00:10:11.387 "claimed": false, 00:10:11.387 "zoned": false, 00:10:11.387 "supported_io_types": { 00:10:11.387 "read": true, 00:10:11.387 "write": true, 00:10:11.387 "unmap": true, 00:10:11.387 "flush": true, 00:10:11.387 "reset": true, 00:10:11.387 "nvme_admin": false, 00:10:11.387 "nvme_io": false, 00:10:11.387 "nvme_io_md": false, 00:10:11.387 "write_zeroes": true, 00:10:11.387 "zcopy": false, 00:10:11.387 "get_zone_info": false, 00:10:11.387 "zone_management": false, 00:10:11.387 
"zone_append": false, 00:10:11.387 "compare": false, 00:10:11.387 "compare_and_write": false, 00:10:11.387 "abort": false, 00:10:11.387 "seek_hole": false, 00:10:11.387 "seek_data": false, 00:10:11.387 "copy": false, 00:10:11.387 "nvme_iov_md": false 00:10:11.387 }, 00:10:11.387 "memory_domains": [ 00:10:11.387 { 00:10:11.387 "dma_device_id": "system", 00:10:11.387 "dma_device_type": 1 00:10:11.387 }, 00:10:11.387 { 00:10:11.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.387 "dma_device_type": 2 00:10:11.387 }, 00:10:11.387 { 00:10:11.387 "dma_device_id": "system", 00:10:11.387 "dma_device_type": 1 00:10:11.387 }, 00:10:11.387 { 00:10:11.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.387 "dma_device_type": 2 00:10:11.387 } 00:10:11.387 ], 00:10:11.387 "driver_specific": { 00:10:11.387 "raid": { 00:10:11.387 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:11.387 "strip_size_kb": 64, 00:10:11.387 "state": "online", 00:10:11.387 "raid_level": "concat", 00:10:11.387 "superblock": true, 00:10:11.387 "num_base_bdevs": 2, 00:10:11.387 "num_base_bdevs_discovered": 2, 00:10:11.387 "num_base_bdevs_operational": 2, 00:10:11.387 "base_bdevs_list": [ 00:10:11.387 { 00:10:11.387 "name": "pt1", 00:10:11.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.387 "is_configured": true, 00:10:11.387 "data_offset": 2048, 00:10:11.387 "data_size": 63488 00:10:11.387 }, 00:10:11.387 { 00:10:11.387 "name": "pt2", 00:10:11.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.388 "is_configured": true, 00:10:11.388 "data_offset": 2048, 00:10:11.388 "data_size": 63488 00:10:11.388 } 00:10:11.388 ] 00:10:11.388 } 00:10:11.388 } 00:10:11.388 }' 00:10:11.388 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.388 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.388 pt2' 00:10:11.647 13:44:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.647 [2024-10-01 13:44:21.727951] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0430b4c1-4431-4ee9-b9dd-318faa1f815a 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0430b4c1-4431-4ee9-b9dd-318faa1f815a ']' 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.647 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.648 [2024-10-01 13:44:21.767632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.648 [2024-10-01 13:44:21.767668] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.648 [2024-10-01 13:44:21.767758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.648 [2024-10-01 13:44:21.767812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.648 [2024-10-01 13:44:21.767830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.648 13:44:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.648 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.907 13:44:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.907 [2024-10-01 13:44:21.899510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.907 [2024-10-01 13:44:21.901863] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.907 [2024-10-01 13:44:21.902108] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.907 [2024-10-01 13:44:21.902176] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.907 [2024-10-01 13:44:21.902196] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.907 [2024-10-01 13:44:21.902210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:11.907 request: 00:10:11.907 { 00:10:11.907 "name": "raid_bdev1", 00:10:11.907 "raid_level": "concat", 00:10:11.907 "base_bdevs": [ 00:10:11.907 "malloc1", 00:10:11.907 "malloc2" 00:10:11.907 ], 00:10:11.907 "strip_size_kb": 64, 00:10:11.907 "superblock": false, 00:10:11.907 "method": "bdev_raid_create", 00:10:11.907 "req_id": 1 00:10:11.907 } 00:10:11.907 Got JSON-RPC error response 00:10:11.907 response: 00:10:11.907 { 00:10:11.907 "code": -17, 00:10:11.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.907 } 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.907 13:44:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.907 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.907 [2024-10-01 13:44:21.967364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.907 [2024-10-01 13:44:21.967471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.907 [2024-10-01 13:44:21.967500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.907 [2024-10-01 13:44:21.967515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.907 [2024-10-01 13:44:21.970209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.907 [2024-10-01 13:44:21.970263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.907 [2024-10-01 13:44:21.970367] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.908 [2024-10-01 13:44:21.970450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.908 pt1 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.908 13:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.908 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.908 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.908 "name": "raid_bdev1", 00:10:11.908 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:11.908 "strip_size_kb": 64, 00:10:11.908 "state": "configuring", 00:10:11.908 "raid_level": "concat", 00:10:11.908 "superblock": true, 00:10:11.908 "num_base_bdevs": 2, 00:10:11.908 
"num_base_bdevs_discovered": 1, 00:10:11.908 "num_base_bdevs_operational": 2, 00:10:11.908 "base_bdevs_list": [ 00:10:11.908 { 00:10:11.908 "name": "pt1", 00:10:11.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.908 "is_configured": true, 00:10:11.908 "data_offset": 2048, 00:10:11.908 "data_size": 63488 00:10:11.908 }, 00:10:11.908 { 00:10:11.908 "name": null, 00:10:11.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.908 "is_configured": false, 00:10:11.908 "data_offset": 2048, 00:10:11.908 "data_size": 63488 00:10:11.908 } 00:10:11.908 ] 00:10:11.908 }' 00:10:11.908 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.908 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.476 [2024-10-01 13:44:22.427305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.476 [2024-10-01 13:44:22.427390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.476 [2024-10-01 13:44:22.427428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:12.476 [2024-10-01 13:44:22.427443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.476 [2024-10-01 13:44:22.427966] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.476 [2024-10-01 13:44:22.427991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.476 [2024-10-01 13:44:22.428077] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.476 [2024-10-01 13:44:22.428105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.476 [2024-10-01 13:44:22.428232] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.476 [2024-10-01 13:44:22.428246] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:12.476 [2024-10-01 13:44:22.428515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:12.476 [2024-10-01 13:44:22.428684] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.476 [2024-10-01 13:44:22.428695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:12.476 [2024-10-01 13:44:22.428843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.476 pt2 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.476 "name": "raid_bdev1", 00:10:12.476 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:12.476 "strip_size_kb": 64, 00:10:12.476 "state": "online", 00:10:12.476 "raid_level": "concat", 00:10:12.476 "superblock": true, 00:10:12.476 "num_base_bdevs": 2, 00:10:12.476 "num_base_bdevs_discovered": 2, 00:10:12.476 "num_base_bdevs_operational": 2, 00:10:12.476 "base_bdevs_list": [ 00:10:12.476 { 00:10:12.476 "name": "pt1", 00:10:12.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.476 "is_configured": true, 00:10:12.476 "data_offset": 2048, 00:10:12.476 "data_size": 63488 00:10:12.476 }, 00:10:12.476 { 00:10:12.476 "name": "pt2", 00:10:12.476 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:12.476 "is_configured": true, 00:10:12.476 "data_offset": 2048, 00:10:12.476 "data_size": 63488 00:10:12.476 } 00:10:12.476 ] 00:10:12.476 }' 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.476 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.734 [2024-10-01 13:44:22.883603] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.734 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.734 "name": "raid_bdev1", 00:10:12.734 "aliases": [ 00:10:12.734 "0430b4c1-4431-4ee9-b9dd-318faa1f815a" 00:10:12.734 ], 00:10:12.734 "product_name": "Raid Volume", 00:10:12.734 "block_size": 512, 00:10:12.734 
"num_blocks": 126976, 00:10:12.734 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:12.734 "assigned_rate_limits": { 00:10:12.734 "rw_ios_per_sec": 0, 00:10:12.734 "rw_mbytes_per_sec": 0, 00:10:12.734 "r_mbytes_per_sec": 0, 00:10:12.734 "w_mbytes_per_sec": 0 00:10:12.734 }, 00:10:12.734 "claimed": false, 00:10:12.734 "zoned": false, 00:10:12.734 "supported_io_types": { 00:10:12.734 "read": true, 00:10:12.735 "write": true, 00:10:12.735 "unmap": true, 00:10:12.735 "flush": true, 00:10:12.735 "reset": true, 00:10:12.735 "nvme_admin": false, 00:10:12.735 "nvme_io": false, 00:10:12.735 "nvme_io_md": false, 00:10:12.735 "write_zeroes": true, 00:10:12.735 "zcopy": false, 00:10:12.735 "get_zone_info": false, 00:10:12.735 "zone_management": false, 00:10:12.735 "zone_append": false, 00:10:12.735 "compare": false, 00:10:12.735 "compare_and_write": false, 00:10:12.735 "abort": false, 00:10:12.735 "seek_hole": false, 00:10:12.735 "seek_data": false, 00:10:12.735 "copy": false, 00:10:12.735 "nvme_iov_md": false 00:10:12.735 }, 00:10:12.735 "memory_domains": [ 00:10:12.735 { 00:10:12.735 "dma_device_id": "system", 00:10:12.735 "dma_device_type": 1 00:10:12.735 }, 00:10:12.735 { 00:10:12.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.735 "dma_device_type": 2 00:10:12.735 }, 00:10:12.735 { 00:10:12.735 "dma_device_id": "system", 00:10:12.735 "dma_device_type": 1 00:10:12.735 }, 00:10:12.735 { 00:10:12.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.735 "dma_device_type": 2 00:10:12.735 } 00:10:12.735 ], 00:10:12.735 "driver_specific": { 00:10:12.735 "raid": { 00:10:12.735 "uuid": "0430b4c1-4431-4ee9-b9dd-318faa1f815a", 00:10:12.735 "strip_size_kb": 64, 00:10:12.735 "state": "online", 00:10:12.735 "raid_level": "concat", 00:10:12.735 "superblock": true, 00:10:12.735 "num_base_bdevs": 2, 00:10:12.735 "num_base_bdevs_discovered": 2, 00:10:12.735 "num_base_bdevs_operational": 2, 00:10:12.735 "base_bdevs_list": [ 00:10:12.735 { 00:10:12.735 "name": "pt1", 
00:10:12.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.735 "is_configured": true, 00:10:12.735 "data_offset": 2048, 00:10:12.735 "data_size": 63488 00:10:12.735 }, 00:10:12.735 { 00:10:12.735 "name": "pt2", 00:10:12.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.735 "is_configured": true, 00:10:12.735 "data_offset": 2048, 00:10:12.735 "data_size": 63488 00:10:12.735 } 00:10:12.735 ] 00:10:12.735 } 00:10:12.735 } 00:10:12.735 }' 00:10:12.735 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.062 pt2' 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.062 13:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.062 [2024-10-01 13:44:23.083625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0430b4c1-4431-4ee9-b9dd-318faa1f815a '!=' 0430b4c1-4431-4ee9-b9dd-318faa1f815a ']' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 62087 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62087 ']' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62087 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62087 00:10:13.062 killing process with pid 62087 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62087' 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62087 00:10:13.062 [2024-10-01 13:44:23.162818] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.062 13:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62087 00:10:13.062 [2024-10-01 13:44:23.162933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.062 [2024-10-01 13:44:23.162985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.062 [2024-10-01 13:44:23.163000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:13.321 [2024-10-01 13:44:23.384151] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.698 13:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.698 00:10:14.698 real 0m4.784s 00:10:14.698 user 0m6.665s 00:10:14.698 
sys 0m0.857s 00:10:14.698 13:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.698 13:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.698 ************************************ 00:10:14.698 END TEST raid_superblock_test 00:10:14.698 ************************************ 00:10:14.698 13:44:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:14.698 13:44:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:14.698 13:44:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.698 13:44:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.698 ************************************ 00:10:14.698 START TEST raid_read_error_test 00:10:14.698 ************************************ 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tCg4iEBguX 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62299 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62299 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62299 ']' 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.698 13:44:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.956 [2024-10-01 13:44:24.898898] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:14.956 [2024-10-01 13:44:24.899034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ] 00:10:14.956 [2024-10-01 13:44:25.069548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.215 [2024-10-01 13:44:25.297234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.474 [2024-10-01 13:44:25.517784] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.474 [2024-10-01 13:44:25.517856] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.732 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.732 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 BaseBdev1_malloc 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 true 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 [2024-10-01 13:44:25.825955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.733 [2024-10-01 13:44:25.826149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.733 [2024-10-01 13:44:25.826181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.733 [2024-10-01 13:44:25.826197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.733 [2024-10-01 13:44:25.828780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.733 [2024-10-01 13:44:25.828857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.733 BaseBdev1 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 BaseBdev2_malloc 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 true 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.733 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.733 [2024-10-01 13:44:25.920015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.733 [2024-10-01 13:44:25.920326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.733 [2024-10-01 13:44:25.920386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.733 [2024-10-01 13:44:25.920436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.991 [2024-10-01 13:44:25.923826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:10:15.991 [2024-10-01 13:44:25.923892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.991 BaseBdev2 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.991 [2024-10-01 13:44:25.936244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.991 [2024-10-01 13:44:25.939472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.991 [2024-10-01 13:44:25.939965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.991 [2024-10-01 13:44:25.940162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:15.991 [2024-10-01 13:44:25.940623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:15.991 [2024-10-01 13:44:25.941110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.991 [2024-10-01 13:44:25.941313] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:15.991 [2024-10-01 13:44:25.941689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.991 13:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.991 13:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.991 "name": "raid_bdev1", 00:10:15.991 "uuid": "3f2c2459-1dfb-463c-a8a0-2a8be268e1a6", 00:10:15.991 "strip_size_kb": 64, 00:10:15.991 "state": "online", 00:10:15.991 "raid_level": "concat", 00:10:15.991 "superblock": true, 00:10:15.991 "num_base_bdevs": 2, 00:10:15.991 "num_base_bdevs_discovered": 2, 00:10:15.991 "num_base_bdevs_operational": 2, 00:10:15.991 "base_bdevs_list": [ 00:10:15.991 { 00:10:15.991 "name": "BaseBdev1", 00:10:15.991 "uuid": 
"364412f2-7d39-5855-88d5-be3080bc948b", 00:10:15.991 "is_configured": true, 00:10:15.991 "data_offset": 2048, 00:10:15.991 "data_size": 63488 00:10:15.991 }, 00:10:15.991 { 00:10:15.991 "name": "BaseBdev2", 00:10:15.992 "uuid": "29181b47-f6a6-5aa3-b00a-5ac3bfb0846e", 00:10:15.992 "is_configured": true, 00:10:15.992 "data_offset": 2048, 00:10:15.992 "data_size": 63488 00:10:15.992 } 00:10:15.992 ] 00:10:15.992 }' 00:10:15.992 13:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.992 13:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 13:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.558 13:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.558 [2024-10-01 13:44:26.546092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:17.492 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.493 "name": "raid_bdev1", 00:10:17.493 "uuid": "3f2c2459-1dfb-463c-a8a0-2a8be268e1a6", 00:10:17.493 "strip_size_kb": 64, 00:10:17.493 "state": "online", 00:10:17.493 "raid_level": "concat", 00:10:17.493 "superblock": true, 00:10:17.493 "num_base_bdevs": 2, 00:10:17.493 "num_base_bdevs_discovered": 2, 00:10:17.493 "num_base_bdevs_operational": 2, 00:10:17.493 "base_bdevs_list": [ 00:10:17.493 { 00:10:17.493 "name": "BaseBdev1", 00:10:17.493 "uuid": 
"364412f2-7d39-5855-88d5-be3080bc948b", 00:10:17.493 "is_configured": true, 00:10:17.493 "data_offset": 2048, 00:10:17.493 "data_size": 63488 00:10:17.493 }, 00:10:17.493 { 00:10:17.493 "name": "BaseBdev2", 00:10:17.493 "uuid": "29181b47-f6a6-5aa3-b00a-5ac3bfb0846e", 00:10:17.493 "is_configured": true, 00:10:17.493 "data_offset": 2048, 00:10:17.493 "data_size": 63488 00:10:17.493 } 00:10:17.493 ] 00:10:17.493 }' 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.493 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.751 [2024-10-01 13:44:27.903361] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.751 [2024-10-01 13:44:27.903417] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.751 [2024-10-01 13:44:27.906190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.751 [2024-10-01 13:44:27.906256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.751 [2024-10-01 13:44:27.906289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.751 [2024-10-01 13:44:27.906304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:17.751 { 00:10:17.751 "results": [ 00:10:17.751 { 00:10:17.751 "job": "raid_bdev1", 00:10:17.751 "core_mask": "0x1", 00:10:17.751 "workload": "randrw", 00:10:17.751 "percentage": 50, 00:10:17.751 "status": "finished", 00:10:17.751 "queue_depth": 1, 00:10:17.751 "io_size": 
131072, 00:10:17.751 "runtime": 1.357175, 00:10:17.751 "iops": 14880.910715272534, 00:10:17.751 "mibps": 1860.1138394090667, 00:10:17.751 "io_failed": 1, 00:10:17.751 "io_timeout": 0, 00:10:17.751 "avg_latency_us": 92.98871147311432, 00:10:17.751 "min_latency_us": 26.730923694779115, 00:10:17.751 "max_latency_us": 1829.2176706827308 00:10:17.751 } 00:10:17.751 ], 00:10:17.751 "core_count": 1 00:10:17.751 } 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62299 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62299 ']' 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62299 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.751 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62299 00:10:18.009 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.009 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.009 killing process with pid 62299 00:10:18.009 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62299' 00:10:18.009 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62299 00:10:18.009 [2024-10-01 13:44:27.959080] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.010 13:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62299 00:10:18.010 [2024-10-01 13:44:28.105220] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.382 13:44:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.382 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tCg4iEBguX 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:19.383 ************************************ 00:10:19.383 END TEST raid_read_error_test 00:10:19.383 ************************************ 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:19.383 00:10:19.383 real 0m4.707s 00:10:19.383 user 0m5.573s 00:10:19.383 sys 0m0.663s 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.383 13:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.383 13:44:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:19.383 13:44:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:19.383 13:44:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.383 13:44:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.383 ************************************ 00:10:19.383 START TEST raid_write_error_test 00:10:19.383 ************************************ 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.383 
13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.383 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FMp9t6AJ8c 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62450 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62450 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62450 ']' 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.641 13:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.641 [2024-10-01 13:44:29.669534] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:19.641 [2024-10-01 13:44:29.669687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62450 ] 00:10:19.898 [2024-10-01 13:44:29.842358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.898 [2024-10-01 13:44:30.061282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.157 [2024-10-01 13:44:30.282477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.157 [2024-10-01 13:44:30.282544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.415 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.672 BaseBdev1_malloc 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.672 true 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.672 [2024-10-01 13:44:30.634046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.672 [2024-10-01 13:44:30.634111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.672 [2024-10-01 13:44:30.634134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.672 [2024-10-01 13:44:30.634150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.672 [2024-10-01 13:44:30.636715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.672 [2024-10-01 13:44:30.636778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.672 BaseBdev1 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.672 BaseBdev2_malloc 00:10:20.672 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.673 13:44:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 true 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 [2024-10-01 13:44:30.712130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.673 [2024-10-01 13:44:30.712194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.673 [2024-10-01 13:44:30.712217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.673 [2024-10-01 13:44:30.712231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.673 [2024-10-01 13:44:30.714753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.673 [2024-10-01 13:44:30.714817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.673 BaseBdev2 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 [2024-10-01 13:44:30.724208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:20.673 [2024-10-01 13:44:30.726497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.673 [2024-10-01 13:44:30.726701] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.673 [2024-10-01 13:44:30.726717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.673 [2024-10-01 13:44:30.726995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.673 [2024-10-01 13:44:30.727204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.673 [2024-10-01 13:44:30.727217] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:20.673 [2024-10-01 13:44:30.727386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.673 13:44:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.673 "name": "raid_bdev1", 00:10:20.673 "uuid": "05c4f997-6637-43d7-90bd-1878f1d06915", 00:10:20.673 "strip_size_kb": 64, 00:10:20.673 "state": "online", 00:10:20.673 "raid_level": "concat", 00:10:20.673 "superblock": true, 00:10:20.673 "num_base_bdevs": 2, 00:10:20.673 "num_base_bdevs_discovered": 2, 00:10:20.673 "num_base_bdevs_operational": 2, 00:10:20.673 "base_bdevs_list": [ 00:10:20.673 { 00:10:20.673 "name": "BaseBdev1", 00:10:20.673 "uuid": "e77bfd5b-ee7f-5e29-8303-b52d62f83ef3", 00:10:20.673 "is_configured": true, 00:10:20.673 "data_offset": 2048, 00:10:20.673 "data_size": 63488 00:10:20.673 }, 00:10:20.673 { 00:10:20.673 "name": "BaseBdev2", 00:10:20.673 "uuid": "b1df093d-e55f-5c7a-bf4f-9e4a6f87444b", 00:10:20.673 "is_configured": true, 00:10:20.673 "data_offset": 2048, 00:10:20.673 "data_size": 63488 00:10:20.673 } 00:10:20.673 ] 00:10:20.673 }' 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.673 13:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.271 13:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:10:21.271 13:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.271 [2024-10-01 13:44:31.340769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.222 "name": "raid_bdev1", 00:10:22.222 "uuid": "05c4f997-6637-43d7-90bd-1878f1d06915", 00:10:22.222 "strip_size_kb": 64, 00:10:22.222 "state": "online", 00:10:22.222 "raid_level": "concat", 00:10:22.222 "superblock": true, 00:10:22.222 "num_base_bdevs": 2, 00:10:22.222 "num_base_bdevs_discovered": 2, 00:10:22.222 "num_base_bdevs_operational": 2, 00:10:22.222 "base_bdevs_list": [ 00:10:22.222 { 00:10:22.222 "name": "BaseBdev1", 00:10:22.222 "uuid": "e77bfd5b-ee7f-5e29-8303-b52d62f83ef3", 00:10:22.222 "is_configured": true, 00:10:22.222 "data_offset": 2048, 00:10:22.222 "data_size": 63488 00:10:22.222 }, 00:10:22.222 { 00:10:22.222 "name": "BaseBdev2", 00:10:22.222 "uuid": "b1df093d-e55f-5c7a-bf4f-9e4a6f87444b", 00:10:22.222 "is_configured": true, 00:10:22.222 "data_offset": 2048, 00:10:22.222 "data_size": 63488 00:10:22.222 } 00:10:22.222 ] 00:10:22.222 }' 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.222 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.480 [2024-10-01 13:44:32.645228] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.480 [2024-10-01 13:44:32.645272] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.480 [2024-10-01 13:44:32.648077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.480 [2024-10-01 13:44:32.648129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.480 [2024-10-01 13:44:32.648164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.480 [2024-10-01 13:44:32.648180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.480 { 00:10:22.480 "results": [ 00:10:22.480 { 00:10:22.480 "job": "raid_bdev1", 00:10:22.480 "core_mask": "0x1", 00:10:22.480 "workload": "randrw", 00:10:22.480 "percentage": 50, 00:10:22.480 "status": "finished", 00:10:22.480 "queue_depth": 1, 00:10:22.480 "io_size": 131072, 00:10:22.480 "runtime": 1.304502, 00:10:22.480 "iops": 16095.030900680873, 00:10:22.480 "mibps": 2011.8788625851091, 00:10:22.480 "io_failed": 1, 00:10:22.480 "io_timeout": 0, 00:10:22.480 "avg_latency_us": 85.75192791932602, 00:10:22.480 "min_latency_us": 26.52530120481928, 00:10:22.480 "max_latency_us": 1460.7421686746989 00:10:22.480 } 00:10:22.480 ], 00:10:22.480 "core_count": 1 00:10:22.480 } 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62450 00:10:22.480 13:44:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62450 ']' 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62450 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.480 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62450 00:10:22.739 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.739 killing process with pid 62450 00:10:22.739 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.739 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62450' 00:10:22.739 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62450 00:10:22.739 13:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62450 00:10:22.739 [2024-10-01 13:44:32.700845] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.739 [2024-10-01 13:44:32.840375] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FMp9t6AJ8c 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.115 13:44:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:10:24.115 00:10:24.115 real 0m4.677s 00:10:24.115 user 0m5.603s 00:10:24.115 sys 0m0.637s 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.115 13:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.115 ************************************ 00:10:24.115 END TEST raid_write_error_test 00:10:24.115 ************************************ 00:10:24.115 13:44:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:24.115 13:44:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:24.115 13:44:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:24.115 13:44:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.115 13:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.373 ************************************ 00:10:24.373 START TEST raid_state_function_test 00:10:24.373 ************************************ 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.373 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62588 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.374 Process raid pid: 62588 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62588' 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62588 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62588 ']' 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.374 13:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.374 [2024-10-01 13:44:34.423980] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:24.374 [2024-10-01 13:44:34.424118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.632 [2024-10-01 13:44:34.601227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.891 [2024-10-01 13:44:34.832026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.891 [2024-10-01 13:44:35.058686] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.891 [2024-10-01 13:44:35.058742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.150 [2024-10-01 13:44:35.282751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.150 [2024-10-01 13:44:35.282812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.150 [2024-10-01 13:44:35.282828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.150 [2024-10-01 13:44:35.282842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.150 13:44:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.150 "name": "Existed_Raid", 00:10:25.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.150 "strip_size_kb": 0, 00:10:25.150 "state": "configuring", 00:10:25.150 
"raid_level": "raid1", 00:10:25.150 "superblock": false, 00:10:25.150 "num_base_bdevs": 2, 00:10:25.150 "num_base_bdevs_discovered": 0, 00:10:25.150 "num_base_bdevs_operational": 2, 00:10:25.150 "base_bdevs_list": [ 00:10:25.150 { 00:10:25.150 "name": "BaseBdev1", 00:10:25.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.150 "is_configured": false, 00:10:25.150 "data_offset": 0, 00:10:25.150 "data_size": 0 00:10:25.150 }, 00:10:25.150 { 00:10:25.150 "name": "BaseBdev2", 00:10:25.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.150 "is_configured": false, 00:10:25.150 "data_offset": 0, 00:10:25.150 "data_size": 0 00:10:25.150 } 00:10:25.150 ] 00:10:25.150 }' 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.150 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 [2024-10-01 13:44:35.698103] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.718 [2024-10-01 13:44:35.698149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:25.718 [2024-10-01 13:44:35.710113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.718 [2024-10-01 13:44:35.710168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.718 [2024-10-01 13:44:35.710178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.718 [2024-10-01 13:44:35.710195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 [2024-10-01 13:44:35.770159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.718 BaseBdev1 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 [ 00:10:25.718 { 00:10:25.718 "name": "BaseBdev1", 00:10:25.718 "aliases": [ 00:10:25.718 "8c87e33a-6a39-4f32-b338-e32dee0db618" 00:10:25.718 ], 00:10:25.718 "product_name": "Malloc disk", 00:10:25.718 "block_size": 512, 00:10:25.718 "num_blocks": 65536, 00:10:25.718 "uuid": "8c87e33a-6a39-4f32-b338-e32dee0db618", 00:10:25.718 "assigned_rate_limits": { 00:10:25.718 "rw_ios_per_sec": 0, 00:10:25.718 "rw_mbytes_per_sec": 0, 00:10:25.718 "r_mbytes_per_sec": 0, 00:10:25.718 "w_mbytes_per_sec": 0 00:10:25.718 }, 00:10:25.718 "claimed": true, 00:10:25.718 "claim_type": "exclusive_write", 00:10:25.718 "zoned": false, 00:10:25.718 "supported_io_types": { 00:10:25.718 "read": true, 00:10:25.718 "write": true, 00:10:25.718 "unmap": true, 00:10:25.718 "flush": true, 00:10:25.718 "reset": true, 00:10:25.718 "nvme_admin": false, 00:10:25.718 "nvme_io": false, 00:10:25.718 "nvme_io_md": false, 00:10:25.718 "write_zeroes": true, 00:10:25.718 "zcopy": true, 00:10:25.718 "get_zone_info": false, 00:10:25.718 "zone_management": false, 00:10:25.718 "zone_append": false, 00:10:25.718 "compare": false, 00:10:25.718 "compare_and_write": false, 00:10:25.718 "abort": true, 00:10:25.718 "seek_hole": false, 00:10:25.718 "seek_data": false, 00:10:25.718 "copy": true, 00:10:25.718 "nvme_iov_md": 
false 00:10:25.718 }, 00:10:25.718 "memory_domains": [ 00:10:25.718 { 00:10:25.718 "dma_device_id": "system", 00:10:25.718 "dma_device_type": 1 00:10:25.718 }, 00:10:25.718 { 00:10:25.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.718 "dma_device_type": 2 00:10:25.718 } 00:10:25.718 ], 00:10:25.718 "driver_specific": {} 00:10:25.718 } 00:10:25.718 ] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.718 13:44:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.718 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.718 "name": "Existed_Raid", 00:10:25.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.718 "strip_size_kb": 0, 00:10:25.718 "state": "configuring", 00:10:25.718 "raid_level": "raid1", 00:10:25.718 "superblock": false, 00:10:25.718 "num_base_bdevs": 2, 00:10:25.718 "num_base_bdevs_discovered": 1, 00:10:25.718 "num_base_bdevs_operational": 2, 00:10:25.719 "base_bdevs_list": [ 00:10:25.719 { 00:10:25.719 "name": "BaseBdev1", 00:10:25.719 "uuid": "8c87e33a-6a39-4f32-b338-e32dee0db618", 00:10:25.719 "is_configured": true, 00:10:25.719 "data_offset": 0, 00:10:25.719 "data_size": 65536 00:10:25.719 }, 00:10:25.719 { 00:10:25.719 "name": "BaseBdev2", 00:10:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.719 "is_configured": false, 00:10:25.719 "data_offset": 0, 00:10:25.719 "data_size": 0 00:10:25.719 } 00:10:25.719 ] 00:10:25.719 }' 00:10:25.719 13:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.719 13:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.286 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.287 [2024-10-01 13:44:36.237575] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.287 [2024-10-01 13:44:36.237632] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.287 [2024-10-01 13:44:36.249593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.287 [2024-10-01 13:44:36.251825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.287 [2024-10-01 13:44:36.252916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.287 "name": "Existed_Raid", 00:10:26.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.287 "strip_size_kb": 0, 00:10:26.287 "state": "configuring", 00:10:26.287 "raid_level": "raid1", 00:10:26.287 "superblock": false, 00:10:26.287 "num_base_bdevs": 2, 00:10:26.287 "num_base_bdevs_discovered": 1, 00:10:26.287 "num_base_bdevs_operational": 2, 00:10:26.287 "base_bdevs_list": [ 00:10:26.287 { 00:10:26.287 "name": "BaseBdev1", 00:10:26.287 "uuid": "8c87e33a-6a39-4f32-b338-e32dee0db618", 00:10:26.287 "is_configured": true, 00:10:26.287 "data_offset": 0, 00:10:26.287 "data_size": 65536 00:10:26.287 }, 00:10:26.287 { 00:10:26.287 "name": "BaseBdev2", 00:10:26.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.287 "is_configured": false, 00:10:26.287 "data_offset": 0, 00:10:26.287 "data_size": 0 00:10:26.287 } 00:10:26.287 
] 00:10:26.287 }' 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.287 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.554 [2024-10-01 13:44:36.711521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.554 [2024-10-01 13:44:36.711783] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.554 [2024-10-01 13:44:36.711805] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:26.554 [2024-10-01 13:44:36.712126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:26.554 [2024-10-01 13:44:36.712295] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.554 [2024-10-01 13:44:36.712311] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:26.554 [2024-10-01 13:44:36.712635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.554 BaseBdev2 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.554 13:44:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.554 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.554 [ 00:10:26.554 { 00:10:26.554 "name": "BaseBdev2", 00:10:26.554 "aliases": [ 00:10:26.554 "7d6740d2-576c-43d2-881b-04970aaa7e77" 00:10:26.554 ], 00:10:26.554 "product_name": "Malloc disk", 00:10:26.554 "block_size": 512, 00:10:26.554 "num_blocks": 65536, 00:10:26.554 "uuid": "7d6740d2-576c-43d2-881b-04970aaa7e77", 00:10:26.554 "assigned_rate_limits": { 00:10:26.554 "rw_ios_per_sec": 0, 00:10:26.554 "rw_mbytes_per_sec": 0, 00:10:26.555 "r_mbytes_per_sec": 0, 00:10:26.833 "w_mbytes_per_sec": 0 00:10:26.833 }, 00:10:26.833 "claimed": true, 00:10:26.833 "claim_type": "exclusive_write", 00:10:26.833 "zoned": false, 00:10:26.833 "supported_io_types": { 00:10:26.833 "read": true, 00:10:26.833 "write": true, 00:10:26.833 "unmap": true, 00:10:26.833 "flush": true, 00:10:26.833 "reset": true, 00:10:26.833 "nvme_admin": false, 00:10:26.833 "nvme_io": false, 00:10:26.833 "nvme_io_md": 
false, 00:10:26.833 "write_zeroes": true, 00:10:26.833 "zcopy": true, 00:10:26.833 "get_zone_info": false, 00:10:26.833 "zone_management": false, 00:10:26.833 "zone_append": false, 00:10:26.833 "compare": false, 00:10:26.833 "compare_and_write": false, 00:10:26.833 "abort": true, 00:10:26.833 "seek_hole": false, 00:10:26.833 "seek_data": false, 00:10:26.833 "copy": true, 00:10:26.833 "nvme_iov_md": false 00:10:26.833 }, 00:10:26.833 "memory_domains": [ 00:10:26.833 { 00:10:26.833 "dma_device_id": "system", 00:10:26.833 "dma_device_type": 1 00:10:26.833 }, 00:10:26.833 { 00:10:26.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.833 "dma_device_type": 2 00:10:26.833 } 00:10:26.833 ], 00:10:26.833 "driver_specific": {} 00:10:26.833 } 00:10:26.833 ] 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.833 "name": "Existed_Raid", 00:10:26.833 "uuid": "2b1202b8-fd09-4acc-a9c9-5a05b1329070", 00:10:26.833 "strip_size_kb": 0, 00:10:26.833 "state": "online", 00:10:26.833 "raid_level": "raid1", 00:10:26.833 "superblock": false, 00:10:26.833 "num_base_bdevs": 2, 00:10:26.833 "num_base_bdevs_discovered": 2, 00:10:26.833 "num_base_bdevs_operational": 2, 00:10:26.833 "base_bdevs_list": [ 00:10:26.833 { 00:10:26.833 "name": "BaseBdev1", 00:10:26.833 "uuid": "8c87e33a-6a39-4f32-b338-e32dee0db618", 00:10:26.833 "is_configured": true, 00:10:26.833 "data_offset": 0, 00:10:26.833 "data_size": 65536 00:10:26.833 }, 00:10:26.833 { 00:10:26.833 "name": "BaseBdev2", 00:10:26.833 "uuid": "7d6740d2-576c-43d2-881b-04970aaa7e77", 00:10:26.833 "is_configured": true, 00:10:26.833 "data_offset": 0, 00:10:26.833 "data_size": 65536 00:10:26.833 } 00:10:26.833 ] 00:10:26.833 }' 00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:26.833 13:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.091 [2024-10-01 13:44:37.219589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.091 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.091 "name": "Existed_Raid", 00:10:27.091 "aliases": [ 00:10:27.091 "2b1202b8-fd09-4acc-a9c9-5a05b1329070" 00:10:27.091 ], 00:10:27.091 "product_name": "Raid Volume", 00:10:27.091 "block_size": 512, 00:10:27.091 "num_blocks": 65536, 00:10:27.091 "uuid": "2b1202b8-fd09-4acc-a9c9-5a05b1329070", 00:10:27.091 "assigned_rate_limits": { 00:10:27.091 "rw_ios_per_sec": 0, 00:10:27.091 "rw_mbytes_per_sec": 0, 00:10:27.091 "r_mbytes_per_sec": 
0, 00:10:27.091 "w_mbytes_per_sec": 0 00:10:27.091 }, 00:10:27.091 "claimed": false, 00:10:27.091 "zoned": false, 00:10:27.091 "supported_io_types": { 00:10:27.091 "read": true, 00:10:27.091 "write": true, 00:10:27.091 "unmap": false, 00:10:27.092 "flush": false, 00:10:27.092 "reset": true, 00:10:27.092 "nvme_admin": false, 00:10:27.092 "nvme_io": false, 00:10:27.092 "nvme_io_md": false, 00:10:27.092 "write_zeroes": true, 00:10:27.092 "zcopy": false, 00:10:27.092 "get_zone_info": false, 00:10:27.092 "zone_management": false, 00:10:27.092 "zone_append": false, 00:10:27.092 "compare": false, 00:10:27.092 "compare_and_write": false, 00:10:27.092 "abort": false, 00:10:27.092 "seek_hole": false, 00:10:27.092 "seek_data": false, 00:10:27.092 "copy": false, 00:10:27.092 "nvme_iov_md": false 00:10:27.092 }, 00:10:27.092 "memory_domains": [ 00:10:27.092 { 00:10:27.092 "dma_device_id": "system", 00:10:27.092 "dma_device_type": 1 00:10:27.092 }, 00:10:27.092 { 00:10:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.092 "dma_device_type": 2 00:10:27.092 }, 00:10:27.092 { 00:10:27.092 "dma_device_id": "system", 00:10:27.092 "dma_device_type": 1 00:10:27.092 }, 00:10:27.092 { 00:10:27.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.092 "dma_device_type": 2 00:10:27.092 } 00:10:27.092 ], 00:10:27.092 "driver_specific": { 00:10:27.092 "raid": { 00:10:27.092 "uuid": "2b1202b8-fd09-4acc-a9c9-5a05b1329070", 00:10:27.092 "strip_size_kb": 0, 00:10:27.092 "state": "online", 00:10:27.092 "raid_level": "raid1", 00:10:27.092 "superblock": false, 00:10:27.092 "num_base_bdevs": 2, 00:10:27.092 "num_base_bdevs_discovered": 2, 00:10:27.092 "num_base_bdevs_operational": 2, 00:10:27.092 "base_bdevs_list": [ 00:10:27.092 { 00:10:27.092 "name": "BaseBdev1", 00:10:27.092 "uuid": "8c87e33a-6a39-4f32-b338-e32dee0db618", 00:10:27.092 "is_configured": true, 00:10:27.092 "data_offset": 0, 00:10:27.092 "data_size": 65536 00:10:27.092 }, 00:10:27.092 { 00:10:27.092 "name": "BaseBdev2", 
00:10:27.092 "uuid": "7d6740d2-576c-43d2-881b-04970aaa7e77", 00:10:27.092 "is_configured": true, 00:10:27.092 "data_offset": 0, 00:10:27.092 "data_size": 65536 00:10:27.092 } 00:10:27.092 ] 00:10:27.092 } 00:10:27.092 } 00:10:27.092 }' 00:10:27.092 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.351 BaseBdev2' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.351 [2024-10-01 13:44:37.435358] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.351 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.611 "name": "Existed_Raid", 00:10:27.611 "uuid": "2b1202b8-fd09-4acc-a9c9-5a05b1329070", 00:10:27.611 "strip_size_kb": 0, 00:10:27.611 "state": "online", 00:10:27.611 "raid_level": "raid1", 00:10:27.611 "superblock": false, 00:10:27.611 "num_base_bdevs": 2, 00:10:27.611 "num_base_bdevs_discovered": 1, 00:10:27.611 "num_base_bdevs_operational": 1, 00:10:27.611 "base_bdevs_list": [ 00:10:27.611 
{ 00:10:27.611 "name": null, 00:10:27.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.611 "is_configured": false, 00:10:27.611 "data_offset": 0, 00:10:27.611 "data_size": 65536 00:10:27.611 }, 00:10:27.611 { 00:10:27.611 "name": "BaseBdev2", 00:10:27.611 "uuid": "7d6740d2-576c-43d2-881b-04970aaa7e77", 00:10:27.611 "is_configured": true, 00:10:27.611 "data_offset": 0, 00:10:27.611 "data_size": 65536 00:10:27.611 } 00:10:27.611 ] 00:10:27.611 }' 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.611 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.870 13:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.870 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.870 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.870 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:27.870 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.870 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:27.870 [2024-10-01 13:44:38.011357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.870 [2024-10-01 13:44:38.011469] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.129 [2024-10-01 13:44:38.112626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.129 [2024-10-01 13:44:38.112846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.129 [2024-10-01 13:44:38.113025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:28.129 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.129 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.129 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.129 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62588 00:10:28.130 13:44:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62588 ']' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62588 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62588 00:10:28.130 killing process with pid 62588 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62588' 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62588 00:10:28.130 [2024-10-01 13:44:38.203423] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.130 13:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62588 00:10:28.130 [2024-10-01 13:44:38.220591] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.513 ************************************ 00:10:29.513 END TEST raid_state_function_test 00:10:29.513 ************************************ 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:29.513 00:10:29.513 real 0m5.226s 00:10:29.513 user 0m7.369s 00:10:29.513 sys 0m0.960s 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.513 13:44:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:29.513 13:44:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:29.513 13:44:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.513 13:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.513 ************************************ 00:10:29.513 START TEST raid_state_function_test_sb 00:10:29.513 ************************************ 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62841 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62841' 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.513 Process raid pid: 62841 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62841 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62841 ']' 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.513 13:44:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.513 13:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.771 [2024-10-01 13:44:39.723446] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:29.771 [2024-10-01 13:44:39.723800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.771 [2024-10-01 13:44:39.899364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.030 [2024-10-01 13:44:40.132863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.289 [2024-10-01 13:44:40.349004] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.289 [2024-10-01 13:44:40.349275] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.547 [2024-10-01 13:44:40.634535] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.547 [2024-10-01 13:44:40.634589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.547 [2024-10-01 13:44:40.634604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.547 [2024-10-01 13:44:40.634617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.547 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.548 "name": "Existed_Raid", 00:10:30.548 "uuid": "e4296aac-e3db-4cd1-a7bc-a3968258fa84", 00:10:30.548 "strip_size_kb": 0, 00:10:30.548 "state": "configuring", 00:10:30.548 "raid_level": "raid1", 00:10:30.548 "superblock": true, 00:10:30.548 "num_base_bdevs": 2, 00:10:30.548 "num_base_bdevs_discovered": 0, 00:10:30.548 "num_base_bdevs_operational": 2, 00:10:30.548 "base_bdevs_list": [ 00:10:30.548 { 00:10:30.548 "name": "BaseBdev1", 00:10:30.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.548 "is_configured": false, 00:10:30.548 "data_offset": 0, 00:10:30.548 "data_size": 0 00:10:30.548 }, 00:10:30.548 { 00:10:30.548 "name": "BaseBdev2", 00:10:30.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.548 "is_configured": false, 00:10:30.548 "data_offset": 0, 00:10:30.548 "data_size": 0 00:10:30.548 } 00:10:30.548 ] 00:10:30.548 }' 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.548 13:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 [2024-10-01 13:44:41.053849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:31.117 [2024-10-01 13:44:41.053892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 [2024-10-01 13:44:41.061870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.117 [2024-10-01 13:44:41.061919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.117 [2024-10-01 13:44:41.061930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.117 [2024-10-01 13:44:41.061946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 [2024-10-01 13:44:41.120464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.117 BaseBdev1 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 [ 00:10:31.117 { 00:10:31.117 "name": "BaseBdev1", 00:10:31.117 "aliases": [ 00:10:31.117 "86bc0a7d-bb86-49e8-abb6-cde81ae98a55" 00:10:31.117 ], 00:10:31.117 "product_name": "Malloc disk", 00:10:31.117 "block_size": 512, 00:10:31.117 "num_blocks": 65536, 00:10:31.117 "uuid": "86bc0a7d-bb86-49e8-abb6-cde81ae98a55", 00:10:31.117 "assigned_rate_limits": { 00:10:31.117 "rw_ios_per_sec": 0, 00:10:31.117 "rw_mbytes_per_sec": 0, 00:10:31.117 "r_mbytes_per_sec": 0, 00:10:31.117 "w_mbytes_per_sec": 0 00:10:31.117 }, 00:10:31.117 "claimed": true, 
00:10:31.117 "claim_type": "exclusive_write", 00:10:31.117 "zoned": false, 00:10:31.117 "supported_io_types": { 00:10:31.117 "read": true, 00:10:31.117 "write": true, 00:10:31.117 "unmap": true, 00:10:31.117 "flush": true, 00:10:31.117 "reset": true, 00:10:31.117 "nvme_admin": false, 00:10:31.117 "nvme_io": false, 00:10:31.117 "nvme_io_md": false, 00:10:31.117 "write_zeroes": true, 00:10:31.117 "zcopy": true, 00:10:31.117 "get_zone_info": false, 00:10:31.117 "zone_management": false, 00:10:31.117 "zone_append": false, 00:10:31.117 "compare": false, 00:10:31.117 "compare_and_write": false, 00:10:31.117 "abort": true, 00:10:31.117 "seek_hole": false, 00:10:31.117 "seek_data": false, 00:10:31.117 "copy": true, 00:10:31.117 "nvme_iov_md": false 00:10:31.117 }, 00:10:31.117 "memory_domains": [ 00:10:31.117 { 00:10:31.117 "dma_device_id": "system", 00:10:31.117 "dma_device_type": 1 00:10:31.117 }, 00:10:31.117 { 00:10:31.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.117 "dma_device_type": 2 00:10:31.117 } 00:10:31.117 ], 00:10:31.117 "driver_specific": {} 00:10:31.117 } 00:10:31.117 ] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.117 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.118 "name": "Existed_Raid", 00:10:31.118 "uuid": "3c583e61-c549-464a-9e39-5b75ff670ea2", 00:10:31.118 "strip_size_kb": 0, 00:10:31.118 "state": "configuring", 00:10:31.118 "raid_level": "raid1", 00:10:31.118 "superblock": true, 00:10:31.118 "num_base_bdevs": 2, 00:10:31.118 "num_base_bdevs_discovered": 1, 00:10:31.118 "num_base_bdevs_operational": 2, 00:10:31.118 "base_bdevs_list": [ 00:10:31.118 { 00:10:31.118 "name": "BaseBdev1", 00:10:31.118 "uuid": "86bc0a7d-bb86-49e8-abb6-cde81ae98a55", 00:10:31.118 "is_configured": true, 00:10:31.118 "data_offset": 2048, 00:10:31.118 "data_size": 63488 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "name": "BaseBdev2", 00:10:31.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.118 "is_configured": false, 00:10:31.118 
"data_offset": 0, 00:10:31.118 "data_size": 0 00:10:31.118 } 00:10:31.118 ] 00:10:31.118 }' 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.118 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 [2024-10-01 13:44:41.575896] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.685 [2024-10-01 13:44:41.575955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 [2024-10-01 13:44:41.587915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.685 [2024-10-01 13:44:41.590164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.685 [2024-10-01 13:44:41.590210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.685 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.686 "name": "Existed_Raid", 00:10:31.686 "uuid": "a66b6a22-39e3-42ab-b457-c723bf213e43", 00:10:31.686 "strip_size_kb": 0, 00:10:31.686 "state": "configuring", 00:10:31.686 "raid_level": "raid1", 00:10:31.686 "superblock": true, 00:10:31.686 "num_base_bdevs": 2, 00:10:31.686 "num_base_bdevs_discovered": 1, 00:10:31.686 "num_base_bdevs_operational": 2, 00:10:31.686 "base_bdevs_list": [ 00:10:31.686 { 00:10:31.686 "name": "BaseBdev1", 00:10:31.686 "uuid": "86bc0a7d-bb86-49e8-abb6-cde81ae98a55", 00:10:31.686 "is_configured": true, 00:10:31.686 "data_offset": 2048, 00:10:31.686 "data_size": 63488 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "name": "BaseBdev2", 00:10:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.686 "is_configured": false, 00:10:31.686 "data_offset": 0, 00:10:31.686 "data_size": 0 00:10:31.686 } 00:10:31.686 ] 00:10:31.686 }' 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.686 13:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.945 [2024-10-01 13:44:42.058992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.945 [2024-10-01 13:44:42.059304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.945 [2024-10-01 13:44:42.059323] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.945 [2024-10-01 13:44:42.059646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.945 
[2024-10-01 13:44:42.059808] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.945 [2024-10-01 13:44:42.059824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:31.945 BaseBdev2 00:10:31.945 [2024-10-01 13:44:42.059968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.945 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.946 [ 00:10:31.946 { 00:10:31.946 "name": "BaseBdev2", 00:10:31.946 "aliases": [ 00:10:31.946 "087073d1-ec2a-4d1e-91d6-0c43bf03f8a5" 00:10:31.946 ], 00:10:31.946 "product_name": "Malloc disk", 00:10:31.946 "block_size": 512, 00:10:31.946 "num_blocks": 65536, 00:10:31.946 "uuid": "087073d1-ec2a-4d1e-91d6-0c43bf03f8a5", 00:10:31.946 "assigned_rate_limits": { 00:10:31.946 "rw_ios_per_sec": 0, 00:10:31.946 "rw_mbytes_per_sec": 0, 00:10:31.946 "r_mbytes_per_sec": 0, 00:10:31.946 "w_mbytes_per_sec": 0 00:10:31.946 }, 00:10:31.946 "claimed": true, 00:10:31.946 "claim_type": "exclusive_write", 00:10:31.946 "zoned": false, 00:10:31.946 "supported_io_types": { 00:10:31.946 "read": true, 00:10:31.946 "write": true, 00:10:31.946 "unmap": true, 00:10:31.946 "flush": true, 00:10:31.946 "reset": true, 00:10:31.946 "nvme_admin": false, 00:10:31.946 "nvme_io": false, 00:10:31.946 "nvme_io_md": false, 00:10:31.946 "write_zeroes": true, 00:10:31.946 "zcopy": true, 00:10:31.946 "get_zone_info": false, 00:10:31.946 "zone_management": false, 00:10:31.946 "zone_append": false, 00:10:31.946 "compare": false, 00:10:31.946 "compare_and_write": false, 00:10:31.946 "abort": true, 00:10:31.946 "seek_hole": false, 00:10:31.946 "seek_data": false, 00:10:31.946 "copy": true, 00:10:31.946 "nvme_iov_md": false 00:10:31.946 }, 00:10:31.946 "memory_domains": [ 00:10:31.946 { 00:10:31.946 "dma_device_id": "system", 00:10:31.946 "dma_device_type": 1 00:10:31.946 }, 00:10:31.946 { 00:10:31.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.946 "dma_device_type": 2 00:10:31.946 } 00:10:31.946 ], 00:10:31.946 "driver_specific": {} 00:10:31.946 } 00:10:31.946 ] 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.946 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.205 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:32.205 "name": "Existed_Raid", 00:10:32.205 "uuid": "a66b6a22-39e3-42ab-b457-c723bf213e43", 00:10:32.205 "strip_size_kb": 0, 00:10:32.205 "state": "online", 00:10:32.205 "raid_level": "raid1", 00:10:32.205 "superblock": true, 00:10:32.205 "num_base_bdevs": 2, 00:10:32.205 "num_base_bdevs_discovered": 2, 00:10:32.205 "num_base_bdevs_operational": 2, 00:10:32.205 "base_bdevs_list": [ 00:10:32.205 { 00:10:32.205 "name": "BaseBdev1", 00:10:32.205 "uuid": "86bc0a7d-bb86-49e8-abb6-cde81ae98a55", 00:10:32.205 "is_configured": true, 00:10:32.205 "data_offset": 2048, 00:10:32.205 "data_size": 63488 00:10:32.205 }, 00:10:32.205 { 00:10:32.205 "name": "BaseBdev2", 00:10:32.205 "uuid": "087073d1-ec2a-4d1e-91d6-0c43bf03f8a5", 00:10:32.205 "is_configured": true, 00:10:32.205 "data_offset": 2048, 00:10:32.205 "data_size": 63488 00:10:32.205 } 00:10:32.205 ] 00:10:32.205 }' 00:10:32.205 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.205 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.465 13:44:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.465 [2024-10-01 13:44:42.506744] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.465 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.465 "name": "Existed_Raid", 00:10:32.465 "aliases": [ 00:10:32.465 "a66b6a22-39e3-42ab-b457-c723bf213e43" 00:10:32.465 ], 00:10:32.465 "product_name": "Raid Volume", 00:10:32.465 "block_size": 512, 00:10:32.465 "num_blocks": 63488, 00:10:32.465 "uuid": "a66b6a22-39e3-42ab-b457-c723bf213e43", 00:10:32.465 "assigned_rate_limits": { 00:10:32.465 "rw_ios_per_sec": 0, 00:10:32.465 "rw_mbytes_per_sec": 0, 00:10:32.465 "r_mbytes_per_sec": 0, 00:10:32.465 "w_mbytes_per_sec": 0 00:10:32.465 }, 00:10:32.465 "claimed": false, 00:10:32.465 "zoned": false, 00:10:32.465 "supported_io_types": { 00:10:32.465 "read": true, 00:10:32.465 "write": true, 00:10:32.465 "unmap": false, 00:10:32.465 "flush": false, 00:10:32.465 "reset": true, 00:10:32.465 "nvme_admin": false, 00:10:32.465 "nvme_io": false, 00:10:32.465 "nvme_io_md": false, 00:10:32.465 "write_zeroes": true, 00:10:32.465 "zcopy": false, 00:10:32.465 "get_zone_info": false, 00:10:32.465 "zone_management": false, 00:10:32.465 "zone_append": false, 00:10:32.465 "compare": false, 00:10:32.465 "compare_and_write": false, 00:10:32.465 "abort": false, 00:10:32.465 "seek_hole": false, 00:10:32.465 "seek_data": false, 00:10:32.465 "copy": false, 00:10:32.465 "nvme_iov_md": false 00:10:32.465 }, 00:10:32.465 "memory_domains": [ 00:10:32.465 { 00:10:32.465 "dma_device_id": "system", 00:10:32.465 
"dma_device_type": 1 00:10:32.465 }, 00:10:32.465 { 00:10:32.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.465 "dma_device_type": 2 00:10:32.465 }, 00:10:32.465 { 00:10:32.465 "dma_device_id": "system", 00:10:32.465 "dma_device_type": 1 00:10:32.465 }, 00:10:32.466 { 00:10:32.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.466 "dma_device_type": 2 00:10:32.466 } 00:10:32.466 ], 00:10:32.466 "driver_specific": { 00:10:32.466 "raid": { 00:10:32.466 "uuid": "a66b6a22-39e3-42ab-b457-c723bf213e43", 00:10:32.466 "strip_size_kb": 0, 00:10:32.466 "state": "online", 00:10:32.466 "raid_level": "raid1", 00:10:32.466 "superblock": true, 00:10:32.466 "num_base_bdevs": 2, 00:10:32.466 "num_base_bdevs_discovered": 2, 00:10:32.466 "num_base_bdevs_operational": 2, 00:10:32.466 "base_bdevs_list": [ 00:10:32.466 { 00:10:32.466 "name": "BaseBdev1", 00:10:32.466 "uuid": "86bc0a7d-bb86-49e8-abb6-cde81ae98a55", 00:10:32.466 "is_configured": true, 00:10:32.466 "data_offset": 2048, 00:10:32.466 "data_size": 63488 00:10:32.466 }, 00:10:32.466 { 00:10:32.466 "name": "BaseBdev2", 00:10:32.466 "uuid": "087073d1-ec2a-4d1e-91d6-0c43bf03f8a5", 00:10:32.466 "is_configured": true, 00:10:32.466 "data_offset": 2048, 00:10:32.466 "data_size": 63488 00:10:32.466 } 00:10:32.466 ] 00:10:32.466 } 00:10:32.466 } 00:10:32.466 }' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.466 BaseBdev2' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.466 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.725 13:44:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.725 [2024-10-01 13:44:42.702249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.725 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.725 "name": "Existed_Raid", 00:10:32.725 "uuid": "a66b6a22-39e3-42ab-b457-c723bf213e43", 00:10:32.725 "strip_size_kb": 0, 00:10:32.725 "state": "online", 00:10:32.725 "raid_level": "raid1", 00:10:32.725 "superblock": true, 00:10:32.725 "num_base_bdevs": 2, 00:10:32.725 "num_base_bdevs_discovered": 1, 00:10:32.725 "num_base_bdevs_operational": 1, 00:10:32.725 "base_bdevs_list": [ 00:10:32.725 { 00:10:32.725 "name": null, 00:10:32.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.725 "is_configured": false, 00:10:32.725 "data_offset": 0, 00:10:32.725 "data_size": 63488 00:10:32.725 }, 00:10:32.725 { 00:10:32.726 "name": "BaseBdev2", 00:10:32.726 "uuid": "087073d1-ec2a-4d1e-91d6-0c43bf03f8a5", 00:10:32.726 "is_configured": true, 00:10:32.726 "data_offset": 2048, 00:10:32.726 "data_size": 63488 00:10:32.726 } 00:10:32.726 ] 00:10:32.726 }' 00:10:32.726 13:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.726 13:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.293 [2024-10-01 13:44:43.283439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.293 [2024-10-01 13:44:43.283695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.293 [2024-10-01 13:44:43.387597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.293 [2024-10-01 13:44:43.387660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.293 [2024-10-01 13:44:43.387676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62841 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62841 ']' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62841 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62841 00:10:33.293 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:33.293 13:44:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:33.553 killing process with pid 62841 00:10:33.553 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62841' 00:10:33.553 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62841 00:10:33.553 [2024-10-01 13:44:43.483966] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.553 13:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62841 00:10:33.553 [2024-10-01 13:44:43.502672] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.933 13:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.933 00:10:34.933 real 0m5.225s 00:10:34.933 user 0m7.363s 00:10:34.933 sys 0m0.909s 00:10:34.933 13:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.933 13:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.933 ************************************ 00:10:34.933 END TEST raid_state_function_test_sb 00:10:34.933 ************************************ 00:10:34.933 13:44:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:34.933 13:44:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:34.933 13:44:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.933 13:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.933 ************************************ 00:10:34.933 START TEST raid_superblock_test 00:10:34.933 ************************************ 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63099 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63099 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63099 ']' 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.933 13:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.933 [2024-10-01 13:44:45.019787] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:34.933 [2024-10-01 13:44:45.020532] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63099 ] 00:10:35.192 [2024-10-01 13:44:45.191030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.450 [2024-10-01 13:44:45.407563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.450 [2024-10-01 13:44:45.630209] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.450 [2024-10-01 13:44:45.630247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.708 13:44:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.708 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.966 malloc1 00:10:35.966 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 [2024-10-01 13:44:45.927191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.967 [2024-10-01 13:44:45.927462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.967 [2024-10-01 13:44:45.927534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:35.967 [2024-10-01 13:44:45.927654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.967 
[2024-10-01 13:44:45.930502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.967 [2024-10-01 13:44:45.930683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.967 pt1 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 malloc2 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.967 13:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.967 13:44:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 [2024-10-01 13:44:45.997301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.967 [2024-10-01 13:44:45.997531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.967 [2024-10-01 13:44:45.997597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:35.967 [2024-10-01 13:44:45.997704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.967 [2024-10-01 13:44:46.000611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.967 [2024-10-01 13:44:46.000789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.967 pt2 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 [2024-10-01 13:44:46.009534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:35.967 [2024-10-01 13:44:46.011816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.967 [2024-10-01 13:44:46.012005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:35.967 [2024-10-01 13:44:46.012022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.967 [2024-10-01 
13:44:46.012345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:35.967 [2024-10-01 13:44:46.012527] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:35.967 [2024-10-01 13:44:46.012544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:35.967 [2024-10-01 13:44:46.012707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.967 "name": "raid_bdev1", 00:10:35.967 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:35.967 "strip_size_kb": 0, 00:10:35.967 "state": "online", 00:10:35.967 "raid_level": "raid1", 00:10:35.967 "superblock": true, 00:10:35.967 "num_base_bdevs": 2, 00:10:35.967 "num_base_bdevs_discovered": 2, 00:10:35.967 "num_base_bdevs_operational": 2, 00:10:35.967 "base_bdevs_list": [ 00:10:35.967 { 00:10:35.967 "name": "pt1", 00:10:35.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.967 "is_configured": true, 00:10:35.967 "data_offset": 2048, 00:10:35.967 "data_size": 63488 00:10:35.967 }, 00:10:35.967 { 00:10:35.967 "name": "pt2", 00:10:35.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.967 "is_configured": true, 00:10:35.967 "data_offset": 2048, 00:10:35.967 "data_size": 63488 00:10:35.967 } 00:10:35.967 ] 00:10:35.967 }' 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.967 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.535 13:44:46 
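The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced above fetches `bdev_raid_get_bdevs` output, selects the entry named `raid_bdev1` with `jq`, and compares its fields against the expected values. A minimal standalone re-creation of that check follows; the JSON literal is trimmed from the log, and the `jq` selection is replaced by a bash regex so the sketch has no external dependencies.

```shell
#!/usr/bin/env bash
# Minimal re-creation of the verify_raid_bdev_state check: pull scalar
# fields out of the raid_bdev_info JSON and compare them with the
# expected state. Field values are copied from the log above.
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid1", "num_base_bdevs": 2, "num_base_bdevs_discovered": 2 }'

get_field() { # get_field <json> <key> -> prints the scalar value
    local re="\"$2\": *\"?([A-Za-z0-9_-]+)\"?"
    [[ $1 =~ $re ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

state=$(get_field "$raid_bdev_info" state)
level=$(get_field "$raid_bdev_info" raid_level)
discovered=$(get_field "$raid_bdev_info" num_base_bdevs_discovered)

[[ $state == online ]] || exit 1
[[ $level == raid1 ]] || exit 1
[[ $discovered == 2 ]] || exit 1
echo "raid_bdev1 verified: $state/$level, $discovered base bdevs discovered"
```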
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.535 [2024-10-01 13:44:46.521080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.535 "name": "raid_bdev1", 00:10:36.535 "aliases": [ 00:10:36.535 "1381ec90-ae36-4bcb-b386-e00390a64171" 00:10:36.535 ], 00:10:36.535 "product_name": "Raid Volume", 00:10:36.535 "block_size": 512, 00:10:36.535 "num_blocks": 63488, 00:10:36.535 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:36.535 "assigned_rate_limits": { 00:10:36.535 "rw_ios_per_sec": 0, 00:10:36.535 "rw_mbytes_per_sec": 0, 00:10:36.535 "r_mbytes_per_sec": 0, 00:10:36.535 "w_mbytes_per_sec": 0 00:10:36.535 }, 00:10:36.535 "claimed": false, 00:10:36.535 "zoned": false, 00:10:36.535 "supported_io_types": { 00:10:36.535 "read": true, 00:10:36.535 "write": true, 00:10:36.535 "unmap": false, 00:10:36.535 "flush": false, 00:10:36.535 "reset": true, 00:10:36.535 "nvme_admin": false, 00:10:36.535 "nvme_io": false, 00:10:36.535 "nvme_io_md": false, 00:10:36.535 "write_zeroes": true, 00:10:36.535 "zcopy": false, 00:10:36.535 "get_zone_info": false, 00:10:36.535 "zone_management": false, 00:10:36.535 "zone_append": false, 00:10:36.535 "compare": false, 00:10:36.535 "compare_and_write": false, 00:10:36.535 "abort": false, 00:10:36.535 "seek_hole": false, 00:10:36.535 
"seek_data": false, 00:10:36.535 "copy": false, 00:10:36.535 "nvme_iov_md": false 00:10:36.535 }, 00:10:36.535 "memory_domains": [ 00:10:36.535 { 00:10:36.535 "dma_device_id": "system", 00:10:36.535 "dma_device_type": 1 00:10:36.535 }, 00:10:36.535 { 00:10:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.535 "dma_device_type": 2 00:10:36.535 }, 00:10:36.535 { 00:10:36.535 "dma_device_id": "system", 00:10:36.535 "dma_device_type": 1 00:10:36.535 }, 00:10:36.535 { 00:10:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.535 "dma_device_type": 2 00:10:36.535 } 00:10:36.535 ], 00:10:36.535 "driver_specific": { 00:10:36.535 "raid": { 00:10:36.535 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:36.535 "strip_size_kb": 0, 00:10:36.535 "state": "online", 00:10:36.535 "raid_level": "raid1", 00:10:36.535 "superblock": true, 00:10:36.535 "num_base_bdevs": 2, 00:10:36.535 "num_base_bdevs_discovered": 2, 00:10:36.535 "num_base_bdevs_operational": 2, 00:10:36.535 "base_bdevs_list": [ 00:10:36.535 { 00:10:36.535 "name": "pt1", 00:10:36.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.535 "is_configured": true, 00:10:36.535 "data_offset": 2048, 00:10:36.535 "data_size": 63488 00:10:36.535 }, 00:10:36.535 { 00:10:36.535 "name": "pt2", 00:10:36.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.535 "is_configured": true, 00:10:36.535 "data_offset": 2048, 00:10:36.535 "data_size": 63488 00:10:36.535 } 00:10:36.535 ] 00:10:36.535 } 00:10:36.535 } 00:10:36.535 }' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.535 pt2' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.535 13:44:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.535 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
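The `verify_raid_bdev_properties` steps traced above join `.block_size`, `.md_size`, `.md_interleave` and `.dif_type` into one string for the raid volume and for each base bdev, then require the strings to match. With no metadata configured the three trailing fields are null, so the joined value is `512` followed by three blanks, which is what the escaped `\5\1\2\ \ \ ` patterns in the xtrace lines spell out. A dependency-free sketch of that comparison:

```shell
#!/usr/bin/env bash
# Sketch of the property comparison: the jq join from the script is
# reproduced with printf. Values are taken from the log (block_size 512,
# no metadata, so md_size/md_interleave/dif_type are empty).
join_props() { # block_size md_size md_interleave dif_type
    printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

cmp_raid_bdev=$(join_props 512 '' '' '')
cmp_base_bdev=$(join_props 512 '' '' '')

[[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || exit 1
echo "base bdev properties match raid bdev"
```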
00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 [2024-10-01 13:44:46.752751] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1381ec90-ae36-4bcb-b386-e00390a64171 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1381ec90-ae36-4bcb-b386-e00390a64171 ']' 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 [2024-10-01 13:44:46.796452] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.795 [2024-10-01 13:44:46.796481] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.795 [2024-10-01 13:44:46.796571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.795 [2024-10-01 13:44:46.796639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.795 [2024-10-01 13:44:46.796655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.796 [2024-10-01 13:44:46.932296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:36.796 [2024-10-01 13:44:46.934694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:36.796 [2024-10-01 13:44:46.934879] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:10:36.796 [2024-10-01 13:44:46.935073] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:36.796 [2024-10-01 13:44:46.935197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.796 [2024-10-01 13:44:46.935333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:36.796 request: 00:10:36.796 { 00:10:36.796 "name": "raid_bdev1", 00:10:36.796 "raid_level": "raid1", 00:10:36.796 "base_bdevs": [ 00:10:36.796 "malloc1", 00:10:36.796 "malloc2" 00:10:36.796 ], 00:10:36.796 "superblock": false, 00:10:36.796 "method": "bdev_raid_create", 00:10:36.796 "req_id": 1 00:10:36.796 } 00:10:36.796 Got JSON-RPC error response 00:10:36.796 response: 00:10:36.796 { 00:10:36.796 "code": -17, 00:10:36.796 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:36.796 } 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:36.796 13:44:46 bdev_raid.raid_superblock_test -- 
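The `NOT rpc_cmd bdev_raid_create ...` sequence above is a negative test: creating a raid bdev directly over malloc1/malloc2 must fail, because both bdevs still carry the superblock of the earlier raid_bdev1, and the target answers with JSON-RPC code -17 (`-EEXIST`, "File exists"). A standalone sketch of that expectation follows; the error payload is copied from the log, and `NOT`/`fail_rpc` are simplified stand-ins for the autotest helpers.

```shell
#!/usr/bin/env bash
# Sketch of the expected-failure path: assert the error code from the
# JSON-RPC response and that the inverted exit status reads as success.
response='{ "code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists" }'

re='"code": *(-?[0-9]+)'
[[ $response =~ $re ]] && code=${BASH_REMATCH[1]}

NOT() { ! "$@"; }          # same idea as autotest_common.sh's NOT helper (simplified)
fail_rpc() { return 1; }   # stand-in for the failing bdev_raid_create

[ "$code" = "-17" ] || exit 1
NOT fail_rpc && echo "creation failed as expected (code $code)"
```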
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.054 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:37.054 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:37.054 13:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.054 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.054 13:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.054 [2024-10-01 13:44:47.004132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.054 [2024-10-01 13:44:47.004198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.054 [2024-10-01 13:44:47.004221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:37.054 [2024-10-01 13:44:47.004236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.054 [2024-10-01 13:44:47.006811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.054 [2024-10-01 13:44:47.006874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.054 [2024-10-01 13:44:47.006958] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:37.054 [2024-10-01 13:44:47.007026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.054 pt1 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.054 13:44:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.054 "name": "raid_bdev1", 00:10:37.054 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:37.054 "strip_size_kb": 0, 00:10:37.054 "state": "configuring", 00:10:37.054 "raid_level": "raid1", 00:10:37.054 "superblock": true, 00:10:37.054 "num_base_bdevs": 2, 00:10:37.054 "num_base_bdevs_discovered": 1, 00:10:37.054 "num_base_bdevs_operational": 2, 00:10:37.054 "base_bdevs_list": [ 00:10:37.054 { 00:10:37.054 "name": "pt1", 00:10:37.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.054 
"is_configured": true, 00:10:37.054 "data_offset": 2048, 00:10:37.054 "data_size": 63488 00:10:37.054 }, 00:10:37.054 { 00:10:37.054 "name": null, 00:10:37.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.054 "is_configured": false, 00:10:37.054 "data_offset": 2048, 00:10:37.054 "data_size": 63488 00:10:37.054 } 00:10:37.054 ] 00:10:37.054 }' 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.054 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.311 [2024-10-01 13:44:47.475542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.311 [2024-10-01 13:44:47.475620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.311 [2024-10-01 13:44:47.475646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:37.311 [2024-10-01 13:44:47.475661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.311 [2024-10-01 13:44:47.476182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.311 [2024-10-01 13:44:47.476213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.311 [2024-10-01 13:44:47.476296] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.311 [2024-10-01 13:44:47.476337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.311 [2024-10-01 13:44:47.476475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.311 [2024-10-01 13:44:47.476489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.311 [2024-10-01 13:44:47.476743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:37.311 [2024-10-01 13:44:47.476901] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:37.311 [2024-10-01 13:44:47.476911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:37.311 [2024-10-01 13:44:47.477061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.311 pt2 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.311 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.312 
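The log lines above show the re-assembly path: recreating the passthru bdevs makes the examine callback find the on-disk superblock ("raid superblock found on bdev ptN"), so raid_bdev1 is rebuilt without any explicit `bdev_raid_create`. Its state moves from `configuring` (1 of 2 base bdevs discovered, as the intermediate JSON shows) to `online` once the second base bdev is claimed. A toy model of that state transition:

```shell
#!/usr/bin/env bash
# Toy model of the examine-driven re-assembly: each discovered base bdev
# bumps the counter; the raid goes online when all of them are claimed.
num_base_bdevs=2
discovered=0
state=configuring

claim_base_bdev() { # called once per base bdev whose superblock matches
    discovered=$((discovered + 1))
    if ((discovered == num_base_bdevs)); then
        state=online
    fi
}

claim_base_bdev           # pt1 examined -> still configuring
mid_state=$state
claim_base_bdev           # pt2 examined -> online
echo "$mid_state -> $state"
```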
13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.312 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.569 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.569 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.569 "name": "raid_bdev1", 00:10:37.569 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:37.569 "strip_size_kb": 0, 00:10:37.569 "state": "online", 00:10:37.569 "raid_level": "raid1", 00:10:37.569 "superblock": true, 00:10:37.569 "num_base_bdevs": 2, 00:10:37.569 "num_base_bdevs_discovered": 2, 00:10:37.569 "num_base_bdevs_operational": 2, 00:10:37.569 "base_bdevs_list": [ 00:10:37.569 { 00:10:37.569 "name": "pt1", 00:10:37.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.569 "is_configured": true, 00:10:37.569 "data_offset": 2048, 00:10:37.569 "data_size": 63488 00:10:37.569 }, 00:10:37.569 { 00:10:37.569 "name": "pt2", 00:10:37.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.569 "is_configured": true, 00:10:37.569 "data_offset": 2048, 00:10:37.569 "data_size": 63488 00:10:37.569 } 00:10:37.569 ] 00:10:37.569 }' 00:10:37.569 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:37.569 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.827 [2024-10-01 13:44:47.903625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.827 "name": "raid_bdev1", 00:10:37.827 "aliases": [ 00:10:37.827 "1381ec90-ae36-4bcb-b386-e00390a64171" 00:10:37.827 ], 00:10:37.827 "product_name": "Raid Volume", 00:10:37.827 "block_size": 512, 00:10:37.827 "num_blocks": 63488, 00:10:37.827 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:37.827 "assigned_rate_limits": { 00:10:37.827 "rw_ios_per_sec": 0, 00:10:37.827 "rw_mbytes_per_sec": 0, 00:10:37.827 "r_mbytes_per_sec": 0, 00:10:37.827 "w_mbytes_per_sec": 0 
00:10:37.827 }, 00:10:37.827 "claimed": false, 00:10:37.827 "zoned": false, 00:10:37.827 "supported_io_types": { 00:10:37.827 "read": true, 00:10:37.827 "write": true, 00:10:37.827 "unmap": false, 00:10:37.827 "flush": false, 00:10:37.827 "reset": true, 00:10:37.827 "nvme_admin": false, 00:10:37.827 "nvme_io": false, 00:10:37.827 "nvme_io_md": false, 00:10:37.827 "write_zeroes": true, 00:10:37.827 "zcopy": false, 00:10:37.827 "get_zone_info": false, 00:10:37.827 "zone_management": false, 00:10:37.827 "zone_append": false, 00:10:37.827 "compare": false, 00:10:37.827 "compare_and_write": false, 00:10:37.827 "abort": false, 00:10:37.827 "seek_hole": false, 00:10:37.827 "seek_data": false, 00:10:37.827 "copy": false, 00:10:37.827 "nvme_iov_md": false 00:10:37.827 }, 00:10:37.827 "memory_domains": [ 00:10:37.827 { 00:10:37.827 "dma_device_id": "system", 00:10:37.827 "dma_device_type": 1 00:10:37.827 }, 00:10:37.827 { 00:10:37.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.827 "dma_device_type": 2 00:10:37.827 }, 00:10:37.827 { 00:10:37.827 "dma_device_id": "system", 00:10:37.827 "dma_device_type": 1 00:10:37.827 }, 00:10:37.827 { 00:10:37.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.827 "dma_device_type": 2 00:10:37.827 } 00:10:37.827 ], 00:10:37.827 "driver_specific": { 00:10:37.827 "raid": { 00:10:37.827 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:37.827 "strip_size_kb": 0, 00:10:37.827 "state": "online", 00:10:37.827 "raid_level": "raid1", 00:10:37.827 "superblock": true, 00:10:37.827 "num_base_bdevs": 2, 00:10:37.827 "num_base_bdevs_discovered": 2, 00:10:37.827 "num_base_bdevs_operational": 2, 00:10:37.827 "base_bdevs_list": [ 00:10:37.827 { 00:10:37.827 "name": "pt1", 00:10:37.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.827 "is_configured": true, 00:10:37.827 "data_offset": 2048, 00:10:37.827 "data_size": 63488 00:10:37.827 }, 00:10:37.827 { 00:10:37.827 "name": "pt2", 00:10:37.827 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:37.827 "is_configured": true, 00:10:37.827 "data_offset": 2048, 00:10:37.827 "data_size": 63488 00:10:37.827 } 00:10:37.827 ] 00:10:37.827 } 00:10:37.827 } 00:10:37.827 }' 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:37.827 pt2' 00:10:37.827 13:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.086 [2024-10-01 13:44:48.135627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1381ec90-ae36-4bcb-b386-e00390a64171 '!=' 1381ec90-ae36-4bcb-b386-e00390a64171 ']' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.086 [2024-10-01 13:44:48.195384] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:38.086 "name": "raid_bdev1", 00:10:38.086 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:38.086 "strip_size_kb": 0, 00:10:38.086 "state": "online", 00:10:38.086 "raid_level": "raid1", 00:10:38.086 "superblock": true, 00:10:38.086 "num_base_bdevs": 2, 00:10:38.086 "num_base_bdevs_discovered": 1, 00:10:38.086 "num_base_bdevs_operational": 1, 00:10:38.086 "base_bdevs_list": [ 00:10:38.086 { 00:10:38.086 "name": null, 00:10:38.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.086 "is_configured": false, 00:10:38.086 "data_offset": 0, 00:10:38.086 "data_size": 63488 00:10:38.086 }, 00:10:38.086 { 00:10:38.086 "name": "pt2", 00:10:38.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.086 "is_configured": true, 00:10:38.086 "data_offset": 2048, 00:10:38.086 "data_size": 63488 00:10:38.086 } 00:10:38.086 ] 00:10:38.086 }' 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.086 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.653 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.653 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.653 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.653 [2024-10-01 13:44:48.671353] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.654 [2024-10-01 13:44:48.671531] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.654 [2024-10-01 13:44:48.671642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.654 [2024-10-01 13:44:48.671691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.654 [2024-10-01 13:44:48.671707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.654 [2024-10-01 13:44:48.747402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.654 [2024-10-01 13:44:48.747618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.654 [2024-10-01 13:44:48.747674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:38.654 [2024-10-01 13:44:48.747750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.654 [2024-10-01 13:44:48.750464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.654 [2024-10-01 13:44:48.750611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.654 [2024-10-01 13:44:48.750795] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.654 [2024-10-01 13:44:48.750959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.654 [2024-10-01 13:44:48.751114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:38.654 [2024-10-01 13:44:48.751135] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.654 [2024-10-01 13:44:48.751446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.654 [2024-10-01 13:44:48.751606] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:38.654 [2024-10-01 13:44:48.751617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:10:38.654 [2024-10-01 13:44:48.751824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.654 pt2 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:38.654 "name": "raid_bdev1", 00:10:38.654 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:38.654 "strip_size_kb": 0, 00:10:38.654 "state": "online", 00:10:38.654 "raid_level": "raid1", 00:10:38.654 "superblock": true, 00:10:38.654 "num_base_bdevs": 2, 00:10:38.654 "num_base_bdevs_discovered": 1, 00:10:38.654 "num_base_bdevs_operational": 1, 00:10:38.654 "base_bdevs_list": [ 00:10:38.654 { 00:10:38.654 "name": null, 00:10:38.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.654 "is_configured": false, 00:10:38.654 "data_offset": 2048, 00:10:38.654 "data_size": 63488 00:10:38.654 }, 00:10:38.654 { 00:10:38.654 "name": "pt2", 00:10:38.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.654 "is_configured": true, 00:10:38.654 "data_offset": 2048, 00:10:38.654 "data_size": 63488 00:10:38.654 } 00:10:38.654 ] 00:10:38.654 }' 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.654 13:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.246 [2024-10-01 13:44:49.147352] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.246 [2024-10-01 13:44:49.147392] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.246 [2024-10-01 13:44:49.147494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.246 [2024-10-01 13:44:49.147564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.246 [2024-10-01 13:44:49.147577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.246 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.246 [2024-10-01 13:44:49.207383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.246 [2024-10-01 13:44:49.207464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.246 [2024-10-01 13:44:49.207489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:39.246 [2024-10-01 13:44:49.207502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.246 [2024-10-01 13:44:49.210211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.246 [2024-10-01 13:44:49.210257] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.246 [2024-10-01 13:44:49.210357] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:39.246 [2024-10-01 13:44:49.210423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:39.246 [2024-10-01 13:44:49.210575] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:39.246 [2024-10-01 13:44:49.210588] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.246 [2024-10-01 13:44:49.210610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:39.246 [2024-10-01 13:44:49.210685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.246 [2024-10-01 13:44:49.210764] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:39.246 [2024-10-01 13:44:49.210774] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.246 [2024-10-01 13:44:49.211042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:39.246 [2024-10-01 13:44:49.211208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:39.247 [2024-10-01 13:44:49.211224] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:39.247 [2024-10-01 13:44:49.211599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.247 pt1 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.247 "name": "raid_bdev1", 00:10:39.247 "uuid": "1381ec90-ae36-4bcb-b386-e00390a64171", 00:10:39.247 "strip_size_kb": 0, 00:10:39.247 "state": "online", 00:10:39.247 "raid_level": "raid1", 00:10:39.247 "superblock": true, 00:10:39.247 "num_base_bdevs": 2, 00:10:39.247 "num_base_bdevs_discovered": 1, 00:10:39.247 "num_base_bdevs_operational": 
1, 00:10:39.247 "base_bdevs_list": [ 00:10:39.247 { 00:10:39.247 "name": null, 00:10:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.247 "is_configured": false, 00:10:39.247 "data_offset": 2048, 00:10:39.247 "data_size": 63488 00:10:39.247 }, 00:10:39.247 { 00:10:39.247 "name": "pt2", 00:10:39.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.247 "is_configured": true, 00:10:39.247 "data_offset": 2048, 00:10:39.247 "data_size": 63488 00:10:39.247 } 00:10:39.247 ] 00:10:39.247 }' 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.247 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.505 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:39.505 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.505 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.505 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:39.505 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.764 [2024-10-01 13:44:49.715602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1381ec90-ae36-4bcb-b386-e00390a64171 '!=' 1381ec90-ae36-4bcb-b386-e00390a64171 ']' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63099 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63099 ']' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63099 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63099 00:10:39.764 killing process with pid 63099 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63099' 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63099 00:10:39.764 [2024-10-01 13:44:49.788949] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.764 [2024-10-01 13:44:49.789050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.764 13:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63099 00:10:39.764 [2024-10-01 13:44:49.789101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.764 [2024-10-01 13:44:49.789120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:10:40.023 [2024-10-01 13:44:50.002080] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.401 ************************************ 00:10:41.401 END TEST raid_superblock_test 00:10:41.401 ************************************ 00:10:41.401 13:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:41.401 00:10:41.401 real 0m6.389s 00:10:41.401 user 0m9.603s 00:10:41.401 sys 0m1.204s 00:10:41.401 13:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.401 13:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.402 13:44:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:41.402 13:44:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:41.402 13:44:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.402 13:44:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.402 ************************************ 00:10:41.402 START TEST raid_read_error_test 00:10:41.402 ************************************ 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lC3W9VreOF 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63429 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63429 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:41.402 
13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63429 ']' 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.402 13:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.402 [2024-10-01 13:44:51.491637] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:41.402 [2024-10-01 13:44:51.491972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:10:41.661 [2024-10-01 13:44:51.656736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.920 [2024-10-01 13:44:51.923344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.180 [2024-10-01 13:44:52.139606] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.180 [2024-10-01 13:44:52.139641] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.180 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 BaseBdev1_malloc 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 true 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 [2024-10-01 13:44:52.399577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:42.440 [2024-10-01 13:44:52.399640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.440 [2024-10-01 13:44:52.399663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:42.440 [2024-10-01 13:44:52.399678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.440 [2024-10-01 13:44:52.402142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.440 [2024-10-01 13:44:52.402321] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:10:42.440 BaseBdev1 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 BaseBdev2_malloc 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 true 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.440 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.440 [2024-10-01 13:44:52.474969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:42.440 [2024-10-01 13:44:52.475032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.441 [2024-10-01 13:44:52.475070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:42.441 [2024-10-01 13:44:52.475084] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.441 [2024-10-01 13:44:52.477636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.441 [2024-10-01 13:44:52.477811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:42.441 BaseBdev2 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.441 [2024-10-01 13:44:52.487015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.441 [2024-10-01 13:44:52.489300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.441 [2024-10-01 13:44:52.489650] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.441 [2024-10-01 13:44:52.489675] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:42.441 [2024-10-01 13:44:52.489947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:42.441 [2024-10-01 13:44:52.490119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.441 [2024-10-01 13:44:52.490130] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:42.441 [2024-10-01 13:44:52.490314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.441 "name": "raid_bdev1", 00:10:42.441 "uuid": "0bc06185-e2af-4677-8355-44a31f5cdd3b", 00:10:42.441 "strip_size_kb": 0, 00:10:42.441 "state": "online", 00:10:42.441 "raid_level": "raid1", 00:10:42.441 "superblock": true, 00:10:42.441 "num_base_bdevs": 2, 00:10:42.441 
"num_base_bdevs_discovered": 2, 00:10:42.441 "num_base_bdevs_operational": 2, 00:10:42.441 "base_bdevs_list": [ 00:10:42.441 { 00:10:42.441 "name": "BaseBdev1", 00:10:42.441 "uuid": "0d19fa92-84c8-5a14-982d-01eaf886b8e4", 00:10:42.441 "is_configured": true, 00:10:42.441 "data_offset": 2048, 00:10:42.441 "data_size": 63488 00:10:42.441 }, 00:10:42.441 { 00:10:42.441 "name": "BaseBdev2", 00:10:42.441 "uuid": "2d1cf34c-3fc1-524e-9976-0a4a9c358742", 00:10:42.441 "is_configured": true, 00:10:42.441 "data_offset": 2048, 00:10:42.441 "data_size": 63488 00:10:42.441 } 00:10:42.441 ] 00:10:42.441 }' 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.441 13:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.051 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.051 13:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:43.051 [2024-10-01 13:44:53.063784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:44.014 13:44:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.014 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.015 13:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.015 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.015 13:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.015 "name": "raid_bdev1", 00:10:44.015 "uuid": "0bc06185-e2af-4677-8355-44a31f5cdd3b", 00:10:44.015 "strip_size_kb": 0, 00:10:44.015 "state": "online", 
00:10:44.015 "raid_level": "raid1", 00:10:44.015 "superblock": true, 00:10:44.015 "num_base_bdevs": 2, 00:10:44.015 "num_base_bdevs_discovered": 2, 00:10:44.015 "num_base_bdevs_operational": 2, 00:10:44.015 "base_bdevs_list": [ 00:10:44.015 { 00:10:44.015 "name": "BaseBdev1", 00:10:44.015 "uuid": "0d19fa92-84c8-5a14-982d-01eaf886b8e4", 00:10:44.015 "is_configured": true, 00:10:44.015 "data_offset": 2048, 00:10:44.015 "data_size": 63488 00:10:44.015 }, 00:10:44.015 { 00:10:44.015 "name": "BaseBdev2", 00:10:44.015 "uuid": "2d1cf34c-3fc1-524e-9976-0a4a9c358742", 00:10:44.015 "is_configured": true, 00:10:44.015 "data_offset": 2048, 00:10:44.015 "data_size": 63488 00:10:44.015 } 00:10:44.015 ] 00:10:44.015 }' 00:10:44.015 13:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.015 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.274 13:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.274 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.274 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.274 [2024-10-01 13:44:54.414362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.274 [2024-10-01 13:44:54.414427] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.274 [2024-10-01 13:44:54.417702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.274 [2024-10-01 13:44:54.417919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.274 [2024-10-01 13:44:54.418104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.274 [2024-10-01 13:44:54.418272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:10:44.274 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.274 { 00:10:44.274 "results": [ 00:10:44.274 { 00:10:44.275 "job": "raid_bdev1", 00:10:44.275 "core_mask": "0x1", 00:10:44.275 "workload": "randrw", 00:10:44.275 "percentage": 50, 00:10:44.275 "status": "finished", 00:10:44.275 "queue_depth": 1, 00:10:44.275 "io_size": 131072, 00:10:44.275 "runtime": 1.350804, 00:10:44.275 "iops": 17914.516095599363, 00:10:44.275 "mibps": 2239.3145119499204, 00:10:44.275 "io_failed": 0, 00:10:44.275 "io_timeout": 0, 00:10:44.275 "avg_latency_us": 53.157976623216705, 00:10:44.275 "min_latency_us": 23.646586345381525, 00:10:44.275 "max_latency_us": 1421.2626506024096 00:10:44.275 } 00:10:44.275 ], 00:10:44.275 "core_count": 1 00:10:44.275 } 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63429 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63429 ']' 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63429 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.275 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63429 00:10:44.534 killing process with pid 63429 00:10:44.534 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.534 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.534 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63429' 00:10:44.534 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63429 00:10:44.534 [2024-10-01 
13:44:54.468869] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.534 13:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63429 00:10:44.534 [2024-10-01 13:44:54.611982] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lC3W9VreOF 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:45.926 00:10:45.926 real 0m4.682s 00:10:45.926 user 0m5.494s 00:10:45.926 sys 0m0.660s 00:10:45.926 ************************************ 00:10:45.926 END TEST raid_read_error_test 00:10:45.926 ************************************ 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.926 13:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.926 13:44:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:45.926 13:44:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:45.926 13:44:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.926 13:44:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.185 ************************************ 00:10:46.185 START TEST 
raid_write_error_test 00:10:46.185 ************************************ 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:46.185 13:44:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cXXCZ2B3MS 00:10:46.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63569 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63569 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63569 ']' 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.185 13:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.185 [2024-10-01 13:44:56.251440] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:46.185 [2024-10-01 13:44:56.251752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63569 ] 00:10:46.444 [2024-10-01 13:44:56.409945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.702 [2024-10-01 13:44:56.650120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.702 [2024-10-01 13:44:56.889566] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.702 [2024-10-01 13:44:56.889833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.960 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 BaseBdev1_malloc 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 true 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 [2024-10-01 13:44:57.200820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.220 [2024-10-01 13:44:57.201013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.220 [2024-10-01 13:44:57.201130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.220 [2024-10-01 13:44:57.201245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.220 [2024-10-01 13:44:57.203892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.220 [2024-10-01 13:44:57.203939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:47.220 BaseBdev1 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 BaseBdev2_malloc 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.220 13:44:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 true 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 [2024-10-01 13:44:57.287470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.220 [2024-10-01 13:44:57.287650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.220 [2024-10-01 13:44:57.287708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.220 [2024-10-01 13:44:57.287816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.220 [2024-10-01 13:44:57.290570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.220 [2024-10-01 13:44:57.290725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.220 BaseBdev2 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 [2024-10-01 13:44:57.299553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:47.220 [2024-10-01 13:44:57.301797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.220 [2024-10-01 13:44:57.302009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.220 [2024-10-01 13:44:57.302030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.220 [2024-10-01 13:44:57.302316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:47.220 [2024-10-01 13:44:57.302541] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.220 [2024-10-01 13:44:57.302570] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:47.220 [2024-10-01 13:44:57.302750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.220 "name": "raid_bdev1", 00:10:47.220 "uuid": "6c89acd3-2be8-4cb7-b73e-ecabefc26944", 00:10:47.220 "strip_size_kb": 0, 00:10:47.220 "state": "online", 00:10:47.220 "raid_level": "raid1", 00:10:47.220 "superblock": true, 00:10:47.220 "num_base_bdevs": 2, 00:10:47.220 "num_base_bdevs_discovered": 2, 00:10:47.220 "num_base_bdevs_operational": 2, 00:10:47.220 "base_bdevs_list": [ 00:10:47.220 { 00:10:47.220 "name": "BaseBdev1", 00:10:47.220 "uuid": "c4a8394a-a6f1-5ae1-966d-c90997b8f066", 00:10:47.220 "is_configured": true, 00:10:47.220 "data_offset": 2048, 00:10:47.220 "data_size": 63488 00:10:47.220 }, 00:10:47.220 { 00:10:47.220 "name": "BaseBdev2", 00:10:47.220 "uuid": "e0fbe167-f21c-5d3d-95c1-3e708dd97aa2", 00:10:47.220 "is_configured": true, 00:10:47.220 "data_offset": 2048, 00:10:47.220 "data_size": 63488 00:10:47.220 } 00:10:47.220 ] 00:10:47.220 }' 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.220 13:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.788 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.788 13:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.788 [2024-10-01 13:44:57.840832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.723 [2024-10-01 13:44:58.761821] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:48.723 [2024-10-01 13:44:58.761892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.723 [2024-10-01 13:44:58.762087] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.723 13:44:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.723 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.723 "name": "raid_bdev1", 00:10:48.723 "uuid": "6c89acd3-2be8-4cb7-b73e-ecabefc26944", 00:10:48.724 "strip_size_kb": 0, 00:10:48.724 "state": "online", 00:10:48.724 "raid_level": "raid1", 00:10:48.724 "superblock": true, 00:10:48.724 "num_base_bdevs": 2, 00:10:48.724 "num_base_bdevs_discovered": 1, 00:10:48.724 "num_base_bdevs_operational": 1, 00:10:48.724 "base_bdevs_list": [ 00:10:48.724 { 00:10:48.724 "name": null, 00:10:48.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.724 "is_configured": false, 00:10:48.724 "data_offset": 0, 00:10:48.724 "data_size": 63488 00:10:48.724 }, 
00:10:48.724 { 00:10:48.724 "name": "BaseBdev2", 00:10:48.724 "uuid": "e0fbe167-f21c-5d3d-95c1-3e708dd97aa2", 00:10:48.724 "is_configured": true, 00:10:48.724 "data_offset": 2048, 00:10:48.724 "data_size": 63488 00:10:48.724 } 00:10:48.724 ] 00:10:48.724 }' 00:10:48.724 13:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.724 13:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.290 [2024-10-01 13:44:59.218984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.290 [2024-10-01 13:44:59.220158] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.290 [2024-10-01 13:44:59.223110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.290 [2024-10-01 13:44:59.223321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.290 [2024-10-01 13:44:59.223443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.290 { 00:10:49.290 "results": [ 00:10:49.290 { 00:10:49.290 "job": "raid_bdev1", 00:10:49.290 "core_mask": "0x1", 00:10:49.290 "workload": "randrw", 00:10:49.290 "percentage": 50, 00:10:49.290 "status": "finished", 00:10:49.290 "queue_depth": 1, 00:10:49.290 "io_size": 131072, 00:10:49.290 "runtime": 1.379017, 00:10:49.290 "iops": 20028.034462229254, 00:10:49.290 "mibps": 2503.5043077786568, 00:10:49.290 "io_failed": 0, 00:10:49.290 "io_timeout": 0, 00:10:49.290 "avg_latency_us": 47.023130779390414, 00:10:49.290 "min_latency_us": 25.188755020080322, 00:10:49.290 "max_latency_us": 1552.8610441767069
00:10:49.290 } 00:10:49.290 ], 00:10:49.290 "core_count": 1 00:10:49.290 } 00:10:49.290 [2024-10-01 13:44:59.223675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63569 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63569 ']' 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63569 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.290 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63569 00:10:49.291 killing process with pid 63569 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.291 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.291 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63569' 00:10:49.291 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63569 00:10:49.291 [2024-10-01 13:44:59.277615] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.291 13:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63569 00:10:49.291 [2024-10-01 13:44:59.422914] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.672 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cXXCZ2B3MS 00:10:50.672 13:45:00
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.672 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:50.938 ************************************ 00:10:50.938 END TEST raid_write_error_test 00:10:50.938 ************************************ 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:50.938 00:10:50.938 real 0m4.743s 00:10:50.938 user 0m5.570s 00:10:50.938 sys 0m0.622s 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.938 13:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 13:45:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:50.938 13:45:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:50.938 13:45:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:50.938 13:45:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:50.938 13:45:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.938 13:45:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 ************************************ 00:10:50.938 START TEST raid_state_function_test 00:10:50.938 ************************************ 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:10:50.938 13:45:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.938 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:50.939 Process raid pid: 63718 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63718 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63718' 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63718 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63718 ']' 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.939 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 [2024-10-01 13:45:01.053826] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:50.939 [2024-10-01 13:45:01.053968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.206 [2024-10-01 13:45:01.229432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.475 [2024-10-01 13:45:01.468601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.746 [2024-10-01 13:45:01.699843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.746 [2024-10-01 13:45:01.700111] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.008 [2024-10-01 13:45:01.979446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.008 [2024-10-01 13:45:01.979508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.008 [2024-10-01 13:45:01.979523] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.008 [2024-10-01 13:45:01.979538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.008 [2024-10-01 13:45:01.979546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.008 [2024-10-01 13:45:01.979561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.008 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.009 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.009 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.009 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.009 "name": "Existed_Raid", 00:10:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.009 "strip_size_kb": 64, 00:10:52.009 "state": "configuring", 00:10:52.009 "raid_level": "raid0", 00:10:52.009 "superblock": false, 00:10:52.009 "num_base_bdevs": 3, 00:10:52.009 "num_base_bdevs_discovered": 0, 00:10:52.009 "num_base_bdevs_operational": 3, 00:10:52.009 "base_bdevs_list": [ 00:10:52.009 { 00:10:52.009 "name": "BaseBdev1", 00:10:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.009 "is_configured": false, 00:10:52.009 "data_offset": 0, 00:10:52.009 "data_size": 0 00:10:52.009 }, 00:10:52.009 { 00:10:52.009 "name": "BaseBdev2", 00:10:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.009 "is_configured": false, 00:10:52.009 "data_offset": 0, 00:10:52.009 "data_size": 0 00:10:52.009 }, 00:10:52.009 { 00:10:52.009 "name": "BaseBdev3", 00:10:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.009 "is_configured": false, 00:10:52.009 "data_offset": 0, 00:10:52.009 "data_size": 0 00:10:52.009 } 00:10:52.009 ] 00:10:52.009 }' 00:10:52.009 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.009 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.267 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.267 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.267 13:45:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.267 [2024-10-01 13:45:02.455388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.267 [2024-10-01 13:45:02.455574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.525 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.526 [2024-10-01 13:45:02.463415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.526 [2024-10-01 13:45:02.463466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.526 [2024-10-01 13:45:02.463478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.526 [2024-10-01 13:45:02.463492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.526 [2024-10-01 13:45:02.463501] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.526 [2024-10-01 13:45:02.463514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.526 [2024-10-01 13:45:02.522435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.526 BaseBdev1 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.526 [ 00:10:52.526 { 00:10:52.526 "name": "BaseBdev1", 00:10:52.526 "aliases": [ 00:10:52.526 "07ce1868-8484-4ee8-b90c-c35feb8bf99d" 00:10:52.526 ], 00:10:52.526 
"product_name": "Malloc disk", 00:10:52.526 "block_size": 512, 00:10:52.526 "num_blocks": 65536, 00:10:52.526 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:52.526 "assigned_rate_limits": { 00:10:52.526 "rw_ios_per_sec": 0, 00:10:52.526 "rw_mbytes_per_sec": 0, 00:10:52.526 "r_mbytes_per_sec": 0, 00:10:52.526 "w_mbytes_per_sec": 0 00:10:52.526 }, 00:10:52.526 "claimed": true, 00:10:52.526 "claim_type": "exclusive_write", 00:10:52.526 "zoned": false, 00:10:52.526 "supported_io_types": { 00:10:52.526 "read": true, 00:10:52.526 "write": true, 00:10:52.526 "unmap": true, 00:10:52.526 "flush": true, 00:10:52.526 "reset": true, 00:10:52.526 "nvme_admin": false, 00:10:52.526 "nvme_io": false, 00:10:52.526 "nvme_io_md": false, 00:10:52.526 "write_zeroes": true, 00:10:52.526 "zcopy": true, 00:10:52.526 "get_zone_info": false, 00:10:52.526 "zone_management": false, 00:10:52.526 "zone_append": false, 00:10:52.526 "compare": false, 00:10:52.526 "compare_and_write": false, 00:10:52.526 "abort": true, 00:10:52.526 "seek_hole": false, 00:10:52.526 "seek_data": false, 00:10:52.526 "copy": true, 00:10:52.526 "nvme_iov_md": false 00:10:52.526 }, 00:10:52.526 "memory_domains": [ 00:10:52.526 { 00:10:52.526 "dma_device_id": "system", 00:10:52.526 "dma_device_type": 1 00:10:52.526 }, 00:10:52.526 { 00:10:52.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.526 "dma_device_type": 2 00:10:52.526 } 00:10:52.526 ], 00:10:52.526 "driver_specific": {} 00:10:52.526 } 00:10:52.526 ] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.526 13:45:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.526 "name": "Existed_Raid", 00:10:52.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.526 "strip_size_kb": 64, 00:10:52.526 "state": "configuring", 00:10:52.526 "raid_level": "raid0", 00:10:52.526 "superblock": false, 00:10:52.526 "num_base_bdevs": 3, 00:10:52.526 "num_base_bdevs_discovered": 1, 00:10:52.526 "num_base_bdevs_operational": 3, 00:10:52.526 "base_bdevs_list": [ 00:10:52.526 { 00:10:52.526 "name": "BaseBdev1", 
00:10:52.526 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:52.526 "is_configured": true, 00:10:52.526 "data_offset": 0, 00:10:52.526 "data_size": 65536 00:10:52.526 }, 00:10:52.526 { 00:10:52.526 "name": "BaseBdev2", 00:10:52.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.526 "is_configured": false, 00:10:52.526 "data_offset": 0, 00:10:52.526 "data_size": 0 00:10:52.526 }, 00:10:52.526 { 00:10:52.526 "name": "BaseBdev3", 00:10:52.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.526 "is_configured": false, 00:10:52.526 "data_offset": 0, 00:10:52.526 "data_size": 0 00:10:52.526 } 00:10:52.526 ] 00:10:52.526 }' 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.526 13:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 [2024-10-01 13:45:03.033761] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.093 [2024-10-01 13:45:03.033956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 [2024-10-01 
13:45:03.045792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.093 [2024-10-01 13:45:03.048224] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.093 [2024-10-01 13:45:03.048290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.093 [2024-10-01 13:45:03.048306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.093 [2024-10-01 13:45:03.048323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.093 "name": "Existed_Raid", 00:10:53.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.093 "strip_size_kb": 64, 00:10:53.093 "state": "configuring", 00:10:53.093 "raid_level": "raid0", 00:10:53.093 "superblock": false, 00:10:53.093 "num_base_bdevs": 3, 00:10:53.093 "num_base_bdevs_discovered": 1, 00:10:53.093 "num_base_bdevs_operational": 3, 00:10:53.093 "base_bdevs_list": [ 00:10:53.093 { 00:10:53.093 "name": "BaseBdev1", 00:10:53.093 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:53.093 "is_configured": true, 00:10:53.093 "data_offset": 0, 00:10:53.093 "data_size": 65536 00:10:53.093 }, 00:10:53.093 { 00:10:53.093 "name": "BaseBdev2", 00:10:53.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.093 "is_configured": false, 00:10:53.093 "data_offset": 0, 00:10:53.093 "data_size": 0 00:10:53.093 }, 00:10:53.093 { 00:10:53.093 "name": "BaseBdev3", 00:10:53.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.093 "is_configured": false, 00:10:53.093 "data_offset": 0, 00:10:53.093 "data_size": 0 00:10:53.093 } 00:10:53.093 ] 00:10:53.093 }' 00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:53.093 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 [2024-10-01 13:45:03.460665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.352 BaseBdev2 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.352 13:45:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.352 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.352 [ 00:10:53.352 { 00:10:53.352 "name": "BaseBdev2", 00:10:53.352 "aliases": [ 00:10:53.352 "d99ec2af-265b-4af1-93e6-94fc9483c282" 00:10:53.352 ], 00:10:53.352 "product_name": "Malloc disk", 00:10:53.352 "block_size": 512, 00:10:53.352 "num_blocks": 65536, 00:10:53.352 "uuid": "d99ec2af-265b-4af1-93e6-94fc9483c282", 00:10:53.352 "assigned_rate_limits": { 00:10:53.352 "rw_ios_per_sec": 0, 00:10:53.352 "rw_mbytes_per_sec": 0, 00:10:53.352 "r_mbytes_per_sec": 0, 00:10:53.352 "w_mbytes_per_sec": 0 00:10:53.352 }, 00:10:53.352 "claimed": true, 00:10:53.352 "claim_type": "exclusive_write", 00:10:53.352 "zoned": false, 00:10:53.352 "supported_io_types": { 00:10:53.352 "read": true, 00:10:53.352 "write": true, 00:10:53.352 "unmap": true, 00:10:53.352 "flush": true, 00:10:53.352 "reset": true, 00:10:53.352 "nvme_admin": false, 00:10:53.352 "nvme_io": false, 00:10:53.352 "nvme_io_md": false, 00:10:53.352 "write_zeroes": true, 00:10:53.352 "zcopy": true, 00:10:53.352 "get_zone_info": false, 00:10:53.352 "zone_management": false, 00:10:53.352 "zone_append": false, 00:10:53.352 "compare": false, 00:10:53.352 "compare_and_write": false, 00:10:53.352 "abort": true, 00:10:53.352 "seek_hole": false, 00:10:53.352 "seek_data": false, 00:10:53.352 "copy": true, 00:10:53.352 "nvme_iov_md": false 00:10:53.352 }, 00:10:53.352 "memory_domains": [ 00:10:53.352 { 00:10:53.352 "dma_device_id": "system", 00:10:53.352 "dma_device_type": 1 00:10:53.352 }, 00:10:53.352 { 00:10:53.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.353 "dma_device_type": 2 00:10:53.353 } 00:10:53.353 ], 00:10:53.353 "driver_specific": {} 00:10:53.353 } 00:10:53.353 ] 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.353 13:45:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.353 13:45:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.611 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.611 "name": "Existed_Raid", 00:10:53.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.611 "strip_size_kb": 64, 00:10:53.611 "state": "configuring", 00:10:53.611 "raid_level": "raid0", 00:10:53.611 "superblock": false, 00:10:53.611 "num_base_bdevs": 3, 00:10:53.611 "num_base_bdevs_discovered": 2, 00:10:53.611 "num_base_bdevs_operational": 3, 00:10:53.611 "base_bdevs_list": [ 00:10:53.611 { 00:10:53.611 "name": "BaseBdev1", 00:10:53.611 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:53.611 "is_configured": true, 00:10:53.611 "data_offset": 0, 00:10:53.611 "data_size": 65536 00:10:53.611 }, 00:10:53.611 { 00:10:53.611 "name": "BaseBdev2", 00:10:53.611 "uuid": "d99ec2af-265b-4af1-93e6-94fc9483c282", 00:10:53.611 "is_configured": true, 00:10:53.611 "data_offset": 0, 00:10:53.611 "data_size": 65536 00:10:53.611 }, 00:10:53.611 { 00:10:53.611 "name": "BaseBdev3", 00:10:53.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.611 "is_configured": false, 00:10:53.611 "data_offset": 0, 00:10:53.611 "data_size": 0 00:10:53.611 } 00:10:53.611 ] 00:10:53.611 }' 00:10:53.611 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.611 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.872 [2024-10-01 13:45:03.978676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.872 [2024-10-01 13:45:03.978716] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.872 [2024-10-01 13:45:03.978738] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:53.872 [2024-10-01 13:45:03.979042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:53.872 [2024-10-01 13:45:03.979231] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.872 [2024-10-01 13:45:03.979246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:53.872 BaseBdev3 00:10:53.872 [2024-10-01 13:45:03.979552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.872 
13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.872 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.872 [ 00:10:53.872 { 00:10:53.872 "name": "BaseBdev3", 00:10:53.872 "aliases": [ 00:10:53.872 "8b01f264-05af-420d-96b7-9f5d7405556e" 00:10:53.872 ], 00:10:53.872 "product_name": "Malloc disk", 00:10:53.872 "block_size": 512, 00:10:53.872 "num_blocks": 65536, 00:10:53.872 "uuid": "8b01f264-05af-420d-96b7-9f5d7405556e", 00:10:53.872 "assigned_rate_limits": { 00:10:53.872 "rw_ios_per_sec": 0, 00:10:53.872 "rw_mbytes_per_sec": 0, 00:10:53.872 "r_mbytes_per_sec": 0, 00:10:53.872 "w_mbytes_per_sec": 0 00:10:53.872 }, 00:10:53.872 "claimed": true, 00:10:53.872 "claim_type": "exclusive_write", 00:10:53.872 "zoned": false, 00:10:53.872 "supported_io_types": { 00:10:53.872 "read": true, 00:10:53.872 "write": true, 00:10:53.872 "unmap": true, 00:10:53.872 "flush": true, 00:10:53.872 "reset": true, 00:10:53.872 "nvme_admin": false, 00:10:53.872 "nvme_io": false, 00:10:53.872 "nvme_io_md": false, 00:10:53.872 "write_zeroes": true, 00:10:53.872 "zcopy": true, 00:10:53.872 "get_zone_info": false, 00:10:53.872 "zone_management": false, 00:10:53.872 "zone_append": false, 00:10:53.872 "compare": false, 00:10:53.872 "compare_and_write": false, 00:10:53.872 "abort": true, 00:10:53.872 "seek_hole": false, 00:10:53.872 "seek_data": false, 00:10:53.872 "copy": true, 00:10:53.872 "nvme_iov_md": false 00:10:53.872 }, 00:10:53.872 "memory_domains": [ 00:10:53.872 { 00:10:53.872 "dma_device_id": "system", 00:10:53.872 "dma_device_type": 1 00:10:53.872 }, 00:10:53.872 { 00:10:53.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.872 "dma_device_type": 2 00:10:53.872 } 00:10:53.872 ], 00:10:53.872 "driver_specific": {} 00:10:53.872 } 00:10:53.872 ] 
00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.872 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.873 "name": "Existed_Raid", 00:10:53.873 "uuid": "6c6384a2-18a1-440c-a1e1-428f9ae4f0a4", 00:10:53.873 "strip_size_kb": 64, 00:10:53.873 "state": "online", 00:10:53.873 "raid_level": "raid0", 00:10:53.873 "superblock": false, 00:10:53.873 "num_base_bdevs": 3, 00:10:53.873 "num_base_bdevs_discovered": 3, 00:10:53.873 "num_base_bdevs_operational": 3, 00:10:53.873 "base_bdevs_list": [ 00:10:53.873 { 00:10:53.873 "name": "BaseBdev1", 00:10:53.873 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:53.873 "is_configured": true, 00:10:53.873 "data_offset": 0, 00:10:53.873 "data_size": 65536 00:10:53.873 }, 00:10:53.873 { 00:10:53.873 "name": "BaseBdev2", 00:10:53.873 "uuid": "d99ec2af-265b-4af1-93e6-94fc9483c282", 00:10:53.873 "is_configured": true, 00:10:53.873 "data_offset": 0, 00:10:53.873 "data_size": 65536 00:10:53.873 }, 00:10:53.873 { 00:10:53.873 "name": "BaseBdev3", 00:10:53.873 "uuid": "8b01f264-05af-420d-96b7-9f5d7405556e", 00:10:53.873 "is_configured": true, 00:10:53.873 "data_offset": 0, 00:10:53.873 "data_size": 65536 00:10:53.873 } 00:10:53.873 ] 00:10:53.873 }' 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.873 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.448 [2024-10-01 13:45:04.470882] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.448 "name": "Existed_Raid", 00:10:54.448 "aliases": [ 00:10:54.448 "6c6384a2-18a1-440c-a1e1-428f9ae4f0a4" 00:10:54.448 ], 00:10:54.448 "product_name": "Raid Volume", 00:10:54.448 "block_size": 512, 00:10:54.448 "num_blocks": 196608, 00:10:54.448 "uuid": "6c6384a2-18a1-440c-a1e1-428f9ae4f0a4", 00:10:54.448 "assigned_rate_limits": { 00:10:54.448 "rw_ios_per_sec": 0, 00:10:54.448 "rw_mbytes_per_sec": 0, 00:10:54.448 "r_mbytes_per_sec": 0, 00:10:54.448 "w_mbytes_per_sec": 0 00:10:54.448 }, 00:10:54.448 "claimed": false, 00:10:54.448 "zoned": false, 00:10:54.448 "supported_io_types": { 00:10:54.448 "read": true, 00:10:54.448 "write": true, 00:10:54.448 "unmap": true, 00:10:54.448 "flush": true, 00:10:54.448 "reset": true, 00:10:54.448 "nvme_admin": false, 00:10:54.448 "nvme_io": false, 00:10:54.448 "nvme_io_md": false, 00:10:54.448 "write_zeroes": true, 00:10:54.448 "zcopy": false, 00:10:54.448 "get_zone_info": false, 00:10:54.448 "zone_management": false, 00:10:54.448 
"zone_append": false, 00:10:54.448 "compare": false, 00:10:54.448 "compare_and_write": false, 00:10:54.448 "abort": false, 00:10:54.448 "seek_hole": false, 00:10:54.448 "seek_data": false, 00:10:54.448 "copy": false, 00:10:54.448 "nvme_iov_md": false 00:10:54.448 }, 00:10:54.448 "memory_domains": [ 00:10:54.448 { 00:10:54.448 "dma_device_id": "system", 00:10:54.448 "dma_device_type": 1 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.448 "dma_device_type": 2 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "dma_device_id": "system", 00:10:54.448 "dma_device_type": 1 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.448 "dma_device_type": 2 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "dma_device_id": "system", 00:10:54.448 "dma_device_type": 1 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.448 "dma_device_type": 2 00:10:54.448 } 00:10:54.448 ], 00:10:54.448 "driver_specific": { 00:10:54.448 "raid": { 00:10:54.448 "uuid": "6c6384a2-18a1-440c-a1e1-428f9ae4f0a4", 00:10:54.448 "strip_size_kb": 64, 00:10:54.448 "state": "online", 00:10:54.448 "raid_level": "raid0", 00:10:54.448 "superblock": false, 00:10:54.448 "num_base_bdevs": 3, 00:10:54.448 "num_base_bdevs_discovered": 3, 00:10:54.448 "num_base_bdevs_operational": 3, 00:10:54.448 "base_bdevs_list": [ 00:10:54.448 { 00:10:54.448 "name": "BaseBdev1", 00:10:54.448 "uuid": "07ce1868-8484-4ee8-b90c-c35feb8bf99d", 00:10:54.448 "is_configured": true, 00:10:54.448 "data_offset": 0, 00:10:54.448 "data_size": 65536 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "name": "BaseBdev2", 00:10:54.448 "uuid": "d99ec2af-265b-4af1-93e6-94fc9483c282", 00:10:54.448 "is_configured": true, 00:10:54.448 "data_offset": 0, 00:10:54.448 "data_size": 65536 00:10:54.448 }, 00:10:54.448 { 00:10:54.448 "name": "BaseBdev3", 00:10:54.448 "uuid": "8b01f264-05af-420d-96b7-9f5d7405556e", 00:10:54.448 "is_configured": true, 
00:10:54.448 "data_offset": 0, 00:10:54.448 "data_size": 65536 00:10:54.448 } 00:10:54.448 ] 00:10:54.448 } 00:10:54.448 } 00:10:54.448 }' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.448 BaseBdev2 00:10:54.448 BaseBdev3' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.448 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.707 [2024-10-01 13:45:04.750342] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.707 [2024-10-01 13:45:04.750487] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.707 [2024-10-01 13:45:04.750625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.707 13:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.967 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.967 "name": "Existed_Raid", 00:10:54.967 "uuid": "6c6384a2-18a1-440c-a1e1-428f9ae4f0a4", 00:10:54.967 "strip_size_kb": 64, 00:10:54.967 "state": "offline", 00:10:54.967 "raid_level": "raid0", 00:10:54.967 "superblock": false, 00:10:54.967 "num_base_bdevs": 3, 00:10:54.967 "num_base_bdevs_discovered": 2, 00:10:54.967 "num_base_bdevs_operational": 2, 00:10:54.967 "base_bdevs_list": [ 00:10:54.967 { 00:10:54.967 "name": null, 00:10:54.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.967 "is_configured": false, 00:10:54.967 "data_offset": 0, 00:10:54.967 "data_size": 65536 00:10:54.967 }, 00:10:54.967 { 00:10:54.967 "name": "BaseBdev2", 00:10:54.967 "uuid": "d99ec2af-265b-4af1-93e6-94fc9483c282", 00:10:54.967 "is_configured": true, 00:10:54.967 "data_offset": 0, 00:10:54.967 "data_size": 65536 00:10:54.967 }, 00:10:54.967 { 00:10:54.967 "name": "BaseBdev3", 00:10:54.967 "uuid": "8b01f264-05af-420d-96b7-9f5d7405556e", 00:10:54.967 "is_configured": true, 00:10:54.967 "data_offset": 0, 00:10:54.967 "data_size": 65536 00:10:54.967 } 00:10:54.967 ] 00:10:54.967 }' 00:10:54.967 13:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.967 13:45:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 [2024-10-01 13:45:05.306295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.226 13:45:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 [2024-10-01 13:45:05.459109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.485 [2024-10-01 13:45:05.459309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.485 BaseBdev2 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:55.485 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.486 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.744 [ 00:10:55.744 { 00:10:55.744 "name": "BaseBdev2", 00:10:55.744 "aliases": [ 00:10:55.744 "93502cb6-59ad-458a-aac5-37380a509cd6" 00:10:55.744 ], 00:10:55.744 "product_name": "Malloc disk", 00:10:55.744 "block_size": 512, 00:10:55.744 "num_blocks": 65536, 00:10:55.744 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:55.744 "assigned_rate_limits": { 00:10:55.744 "rw_ios_per_sec": 0, 00:10:55.744 "rw_mbytes_per_sec": 0, 00:10:55.744 "r_mbytes_per_sec": 0, 00:10:55.744 "w_mbytes_per_sec": 0 00:10:55.744 }, 00:10:55.744 "claimed": false, 00:10:55.744 "zoned": false, 00:10:55.744 "supported_io_types": { 00:10:55.744 "read": true, 00:10:55.744 "write": true, 00:10:55.744 "unmap": true, 00:10:55.744 "flush": true, 00:10:55.744 "reset": true, 00:10:55.744 "nvme_admin": false, 00:10:55.744 "nvme_io": false, 00:10:55.744 "nvme_io_md": false, 00:10:55.744 "write_zeroes": true, 00:10:55.744 "zcopy": true, 00:10:55.744 "get_zone_info": false, 00:10:55.744 "zone_management": false, 00:10:55.744 "zone_append": false, 00:10:55.744 "compare": false, 00:10:55.744 "compare_and_write": false, 00:10:55.744 "abort": true, 00:10:55.744 "seek_hole": false, 00:10:55.744 "seek_data": false, 00:10:55.744 "copy": true, 00:10:55.744 "nvme_iov_md": false 00:10:55.744 }, 00:10:55.744 "memory_domains": [ 00:10:55.744 { 00:10:55.744 "dma_device_id": "system", 00:10:55.744 "dma_device_type": 1 00:10:55.744 }, 
00:10:55.744 { 00:10:55.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.744 "dma_device_type": 2 00:10:55.744 } 00:10:55.744 ], 00:10:55.744 "driver_specific": {} 00:10:55.744 } 00:10:55.744 ] 00:10:55.744 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.744 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.744 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 BaseBdev3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 [ 00:10:55.745 { 00:10:55.745 "name": "BaseBdev3", 00:10:55.745 "aliases": [ 00:10:55.745 "db3e08b3-4ed6-4619-88a2-f774de3ed6ee" 00:10:55.745 ], 00:10:55.745 "product_name": "Malloc disk", 00:10:55.745 "block_size": 512, 00:10:55.745 "num_blocks": 65536, 00:10:55.745 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:55.745 "assigned_rate_limits": { 00:10:55.745 "rw_ios_per_sec": 0, 00:10:55.745 "rw_mbytes_per_sec": 0, 00:10:55.745 "r_mbytes_per_sec": 0, 00:10:55.745 "w_mbytes_per_sec": 0 00:10:55.745 }, 00:10:55.745 "claimed": false, 00:10:55.745 "zoned": false, 00:10:55.745 "supported_io_types": { 00:10:55.745 "read": true, 00:10:55.745 "write": true, 00:10:55.745 "unmap": true, 00:10:55.745 "flush": true, 00:10:55.745 "reset": true, 00:10:55.745 "nvme_admin": false, 00:10:55.745 "nvme_io": false, 00:10:55.745 "nvme_io_md": false, 00:10:55.745 "write_zeroes": true, 00:10:55.745 "zcopy": true, 00:10:55.745 "get_zone_info": false, 00:10:55.745 "zone_management": false, 00:10:55.745 "zone_append": false, 00:10:55.745 "compare": false, 00:10:55.745 "compare_and_write": false, 00:10:55.745 "abort": true, 00:10:55.745 "seek_hole": false, 00:10:55.745 "seek_data": false, 00:10:55.745 "copy": true, 00:10:55.745 "nvme_iov_md": false 00:10:55.745 }, 00:10:55.745 "memory_domains": [ 00:10:55.745 { 00:10:55.745 "dma_device_id": "system", 00:10:55.745 "dma_device_type": 1 00:10:55.745 }, 00:10:55.745 { 
00:10:55.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.745 "dma_device_type": 2 00:10:55.745 } 00:10:55.745 ], 00:10:55.745 "driver_specific": {} 00:10:55.745 } 00:10:55.745 ] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 [2024-10-01 13:45:05.796851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.745 [2024-10-01 13:45:05.797018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.745 [2024-10-01 13:45:05.797149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.745 [2024-10-01 13:45:05.799456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.745 "name": "Existed_Raid", 00:10:55.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.745 "strip_size_kb": 64, 00:10:55.745 "state": "configuring", 00:10:55.745 "raid_level": "raid0", 00:10:55.745 "superblock": false, 00:10:55.745 "num_base_bdevs": 3, 00:10:55.745 "num_base_bdevs_discovered": 2, 00:10:55.745 "num_base_bdevs_operational": 3, 00:10:55.745 "base_bdevs_list": [ 00:10:55.745 { 00:10:55.745 "name": "BaseBdev1", 00:10:55.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.745 
"is_configured": false, 00:10:55.745 "data_offset": 0, 00:10:55.745 "data_size": 0 00:10:55.745 }, 00:10:55.745 { 00:10:55.745 "name": "BaseBdev2", 00:10:55.745 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:55.745 "is_configured": true, 00:10:55.745 "data_offset": 0, 00:10:55.745 "data_size": 65536 00:10:55.745 }, 00:10:55.745 { 00:10:55.745 "name": "BaseBdev3", 00:10:55.745 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:55.745 "is_configured": true, 00:10:55.745 "data_offset": 0, 00:10:55.745 "data_size": 65536 00:10:55.745 } 00:10:55.745 ] 00:10:55.745 }' 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.745 13:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.312 [2024-10-01 13:45:06.260225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.312 13:45:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.312 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.313 "name": "Existed_Raid", 00:10:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.313 "strip_size_kb": 64, 00:10:56.313 "state": "configuring", 00:10:56.313 "raid_level": "raid0", 00:10:56.313 "superblock": false, 00:10:56.313 "num_base_bdevs": 3, 00:10:56.313 "num_base_bdevs_discovered": 1, 00:10:56.313 "num_base_bdevs_operational": 3, 00:10:56.313 "base_bdevs_list": [ 00:10:56.313 { 00:10:56.313 "name": "BaseBdev1", 00:10:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.313 "is_configured": false, 00:10:56.313 "data_offset": 0, 00:10:56.313 "data_size": 0 00:10:56.313 }, 00:10:56.313 { 00:10:56.313 "name": null, 00:10:56.313 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:56.313 "is_configured": false, 00:10:56.313 "data_offset": 0, 
00:10:56.313 "data_size": 65536 00:10:56.313 }, 00:10:56.313 { 00:10:56.313 "name": "BaseBdev3", 00:10:56.313 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:56.313 "is_configured": true, 00:10:56.313 "data_offset": 0, 00:10:56.313 "data_size": 65536 00:10:56.313 } 00:10:56.313 ] 00:10:56.313 }' 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.313 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 [2024-10-01 13:45:06.738090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.573 BaseBdev1 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 [ 00:10:56.832 { 00:10:56.832 "name": "BaseBdev1", 00:10:56.832 "aliases": [ 00:10:56.832 "335767a0-2bc2-4c0a-9555-4ba93a91633a" 00:10:56.832 ], 00:10:56.832 "product_name": "Malloc disk", 00:10:56.832 "block_size": 512, 00:10:56.832 "num_blocks": 65536, 00:10:56.832 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:56.832 "assigned_rate_limits": { 00:10:56.832 "rw_ios_per_sec": 0, 00:10:56.832 "rw_mbytes_per_sec": 0, 00:10:56.832 "r_mbytes_per_sec": 0, 00:10:56.832 "w_mbytes_per_sec": 0 00:10:56.832 }, 00:10:56.832 "claimed": true, 00:10:56.832 "claim_type": "exclusive_write", 00:10:56.832 "zoned": false, 00:10:56.832 "supported_io_types": { 00:10:56.832 "read": true, 00:10:56.832 "write": true, 00:10:56.832 "unmap": 
true, 00:10:56.832 "flush": true, 00:10:56.832 "reset": true, 00:10:56.832 "nvme_admin": false, 00:10:56.832 "nvme_io": false, 00:10:56.832 "nvme_io_md": false, 00:10:56.832 "write_zeroes": true, 00:10:56.832 "zcopy": true, 00:10:56.832 "get_zone_info": false, 00:10:56.832 "zone_management": false, 00:10:56.832 "zone_append": false, 00:10:56.832 "compare": false, 00:10:56.832 "compare_and_write": false, 00:10:56.832 "abort": true, 00:10:56.832 "seek_hole": false, 00:10:56.832 "seek_data": false, 00:10:56.832 "copy": true, 00:10:56.832 "nvme_iov_md": false 00:10:56.832 }, 00:10:56.832 "memory_domains": [ 00:10:56.832 { 00:10:56.832 "dma_device_id": "system", 00:10:56.832 "dma_device_type": 1 00:10:56.832 }, 00:10:56.832 { 00:10:56.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.832 "dma_device_type": 2 00:10:56.832 } 00:10:56.832 ], 00:10:56.832 "driver_specific": {} 00:10:56.832 } 00:10:56.832 ] 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.832 13:45:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.832 "name": "Existed_Raid", 00:10:56.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.832 "strip_size_kb": 64, 00:10:56.832 "state": "configuring", 00:10:56.832 "raid_level": "raid0", 00:10:56.832 "superblock": false, 00:10:56.832 "num_base_bdevs": 3, 00:10:56.832 "num_base_bdevs_discovered": 2, 00:10:56.832 "num_base_bdevs_operational": 3, 00:10:56.832 "base_bdevs_list": [ 00:10:56.832 { 00:10:56.832 "name": "BaseBdev1", 00:10:56.832 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:56.832 "is_configured": true, 00:10:56.832 "data_offset": 0, 00:10:56.832 "data_size": 65536 00:10:56.832 }, 00:10:56.832 { 00:10:56.832 "name": null, 00:10:56.832 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:56.832 "is_configured": false, 00:10:56.832 "data_offset": 0, 00:10:56.832 "data_size": 65536 00:10:56.832 }, 00:10:56.832 { 00:10:56.832 "name": "BaseBdev3", 00:10:56.832 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:56.832 "is_configured": true, 00:10:56.832 "data_offset": 0, 
00:10:56.832 "data_size": 65536 00:10:56.832 } 00:10:56.832 ] 00:10:56.832 }' 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.832 13:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.090 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.090 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 [2024-10-01 13:45:07.289445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.348 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.348 "name": "Existed_Raid", 00:10:57.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.348 "strip_size_kb": 64, 00:10:57.348 "state": "configuring", 00:10:57.348 "raid_level": "raid0", 00:10:57.348 "superblock": false, 00:10:57.348 "num_base_bdevs": 3, 00:10:57.348 "num_base_bdevs_discovered": 1, 00:10:57.348 "num_base_bdevs_operational": 3, 00:10:57.348 "base_bdevs_list": [ 00:10:57.348 { 00:10:57.348 "name": "BaseBdev1", 00:10:57.348 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:57.348 "is_configured": true, 00:10:57.348 "data_offset": 0, 00:10:57.348 "data_size": 65536 00:10:57.348 }, 00:10:57.348 { 
00:10:57.348 "name": null, 00:10:57.348 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:57.348 "is_configured": false, 00:10:57.348 "data_offset": 0, 00:10:57.348 "data_size": 65536 00:10:57.348 }, 00:10:57.348 { 00:10:57.348 "name": null, 00:10:57.348 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:57.348 "is_configured": false, 00:10:57.348 "data_offset": 0, 00:10:57.349 "data_size": 65536 00:10:57.349 } 00:10:57.349 ] 00:10:57.349 }' 00:10:57.349 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.349 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.608 [2024-10-01 13:45:07.760755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.608 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.609 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.868 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.868 "name": "Existed_Raid", 00:10:57.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.868 "strip_size_kb": 64, 00:10:57.868 "state": "configuring", 00:10:57.868 "raid_level": "raid0", 00:10:57.868 
"superblock": false, 00:10:57.868 "num_base_bdevs": 3, 00:10:57.868 "num_base_bdevs_discovered": 2, 00:10:57.868 "num_base_bdevs_operational": 3, 00:10:57.868 "base_bdevs_list": [ 00:10:57.868 { 00:10:57.868 "name": "BaseBdev1", 00:10:57.868 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:57.868 "is_configured": true, 00:10:57.868 "data_offset": 0, 00:10:57.868 "data_size": 65536 00:10:57.868 }, 00:10:57.868 { 00:10:57.868 "name": null, 00:10:57.868 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:57.868 "is_configured": false, 00:10:57.868 "data_offset": 0, 00:10:57.868 "data_size": 65536 00:10:57.868 }, 00:10:57.868 { 00:10:57.868 "name": "BaseBdev3", 00:10:57.868 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:57.868 "is_configured": true, 00:10:57.868 "data_offset": 0, 00:10:57.868 "data_size": 65536 00:10:57.868 } 00:10:57.868 ] 00:10:57.868 }' 00:10:57.868 13:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.868 13:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:58.127 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 [2024-10-01 13:45:08.228178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.386 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.386 "name": "Existed_Raid", 00:10:58.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.386 "strip_size_kb": 64, 00:10:58.386 "state": "configuring", 00:10:58.386 "raid_level": "raid0", 00:10:58.386 "superblock": false, 00:10:58.386 "num_base_bdevs": 3, 00:10:58.386 "num_base_bdevs_discovered": 1, 00:10:58.386 "num_base_bdevs_operational": 3, 00:10:58.386 "base_bdevs_list": [ 00:10:58.386 { 00:10:58.386 "name": null, 00:10:58.386 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:58.386 "is_configured": false, 00:10:58.386 "data_offset": 0, 00:10:58.386 "data_size": 65536 00:10:58.386 }, 00:10:58.386 { 00:10:58.386 "name": null, 00:10:58.386 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:58.386 "is_configured": false, 00:10:58.386 "data_offset": 0, 00:10:58.386 "data_size": 65536 00:10:58.387 }, 00:10:58.387 { 00:10:58.387 "name": "BaseBdev3", 00:10:58.387 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:58.387 "is_configured": true, 00:10:58.387 "data_offset": 0, 00:10:58.387 "data_size": 65536 00:10:58.387 } 00:10:58.387 ] 00:10:58.387 }' 00:10:58.387 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.387 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.645 [2024-10-01 13:45:08.799448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.645 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.904 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.904 "name": "Existed_Raid", 00:10:58.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.904 "strip_size_kb": 64, 00:10:58.904 "state": "configuring", 00:10:58.904 "raid_level": "raid0", 00:10:58.904 "superblock": false, 00:10:58.904 "num_base_bdevs": 3, 00:10:58.904 "num_base_bdevs_discovered": 2, 00:10:58.904 "num_base_bdevs_operational": 3, 00:10:58.904 "base_bdevs_list": [ 00:10:58.904 { 00:10:58.904 "name": null, 00:10:58.904 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:58.904 "is_configured": false, 00:10:58.904 "data_offset": 0, 00:10:58.904 "data_size": 65536 00:10:58.904 }, 00:10:58.904 { 00:10:58.904 "name": "BaseBdev2", 00:10:58.904 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:58.904 "is_configured": true, 00:10:58.904 "data_offset": 0, 00:10:58.904 "data_size": 65536 00:10:58.904 }, 00:10:58.904 { 00:10:58.904 "name": "BaseBdev3", 00:10:58.904 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:58.904 "is_configured": true, 00:10:58.904 "data_offset": 0, 00:10:58.904 "data_size": 65536 00:10:58.904 } 00:10:58.904 ] 00:10:58.904 }' 00:10:58.904 13:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.904 13:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.162 13:45:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 335767a0-2bc2-4c0a-9555-4ba93a91633a 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.162 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.421 [2024-10-01 13:45:09.379129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:59.421 [2024-10-01 13:45:09.379171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.421 [2024-10-01 13:45:09.379184] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:59.421 [2024-10-01 13:45:09.379494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:59.421 [2024-10-01 13:45:09.379657] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.421 [2024-10-01 13:45:09.379668] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:59.421 [2024-10-01 13:45:09.379938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.421 NewBaseBdev 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:59.421 [ 00:10:59.421 { 00:10:59.421 "name": "NewBaseBdev", 00:10:59.421 "aliases": [ 00:10:59.421 "335767a0-2bc2-4c0a-9555-4ba93a91633a" 00:10:59.421 ], 00:10:59.421 "product_name": "Malloc disk", 00:10:59.421 "block_size": 512, 00:10:59.421 "num_blocks": 65536, 00:10:59.421 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:59.421 "assigned_rate_limits": { 00:10:59.421 "rw_ios_per_sec": 0, 00:10:59.421 "rw_mbytes_per_sec": 0, 00:10:59.421 "r_mbytes_per_sec": 0, 00:10:59.421 "w_mbytes_per_sec": 0 00:10:59.421 }, 00:10:59.421 "claimed": true, 00:10:59.421 "claim_type": "exclusive_write", 00:10:59.421 "zoned": false, 00:10:59.421 "supported_io_types": { 00:10:59.421 "read": true, 00:10:59.421 "write": true, 00:10:59.421 "unmap": true, 00:10:59.421 "flush": true, 00:10:59.421 "reset": true, 00:10:59.421 "nvme_admin": false, 00:10:59.421 "nvme_io": false, 00:10:59.421 "nvme_io_md": false, 00:10:59.421 "write_zeroes": true, 00:10:59.421 "zcopy": true, 00:10:59.421 "get_zone_info": false, 00:10:59.421 "zone_management": false, 00:10:59.421 "zone_append": false, 00:10:59.421 "compare": false, 00:10:59.421 "compare_and_write": false, 00:10:59.421 "abort": true, 00:10:59.421 "seek_hole": false, 00:10:59.421 "seek_data": false, 00:10:59.421 "copy": true, 00:10:59.421 "nvme_iov_md": false 00:10:59.421 }, 00:10:59.421 "memory_domains": [ 00:10:59.421 { 00:10:59.421 "dma_device_id": "system", 00:10:59.421 "dma_device_type": 1 00:10:59.421 }, 00:10:59.421 { 00:10:59.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.421 "dma_device_type": 2 00:10:59.421 } 00:10:59.421 ], 00:10:59.421 "driver_specific": {} 00:10:59.421 } 00:10:59.421 ] 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.421 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.422 "name": "Existed_Raid", 00:10:59.422 "uuid": "0f4929e1-ee4a-47c2-95a0-791ecea5b22e", 00:10:59.422 "strip_size_kb": 64, 00:10:59.422 "state": "online", 00:10:59.422 "raid_level": "raid0", 00:10:59.422 "superblock": false, 00:10:59.422 "num_base_bdevs": 3, 00:10:59.422 
"num_base_bdevs_discovered": 3, 00:10:59.422 "num_base_bdevs_operational": 3, 00:10:59.422 "base_bdevs_list": [ 00:10:59.422 { 00:10:59.422 "name": "NewBaseBdev", 00:10:59.422 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:59.422 "is_configured": true, 00:10:59.422 "data_offset": 0, 00:10:59.422 "data_size": 65536 00:10:59.422 }, 00:10:59.422 { 00:10:59.422 "name": "BaseBdev2", 00:10:59.422 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:59.422 "is_configured": true, 00:10:59.422 "data_offset": 0, 00:10:59.422 "data_size": 65536 00:10:59.422 }, 00:10:59.422 { 00:10:59.422 "name": "BaseBdev3", 00:10:59.422 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:59.422 "is_configured": true, 00:10:59.422 "data_offset": 0, 00:10:59.422 "data_size": 65536 00:10:59.422 } 00:10:59.422 ] 00:10:59.422 }' 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.422 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.681 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.681 [2024-10-01 13:45:09.858852] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.940 "name": "Existed_Raid", 00:10:59.940 "aliases": [ 00:10:59.940 "0f4929e1-ee4a-47c2-95a0-791ecea5b22e" 00:10:59.940 ], 00:10:59.940 "product_name": "Raid Volume", 00:10:59.940 "block_size": 512, 00:10:59.940 "num_blocks": 196608, 00:10:59.940 "uuid": "0f4929e1-ee4a-47c2-95a0-791ecea5b22e", 00:10:59.940 "assigned_rate_limits": { 00:10:59.940 "rw_ios_per_sec": 0, 00:10:59.940 "rw_mbytes_per_sec": 0, 00:10:59.940 "r_mbytes_per_sec": 0, 00:10:59.940 "w_mbytes_per_sec": 0 00:10:59.940 }, 00:10:59.940 "claimed": false, 00:10:59.940 "zoned": false, 00:10:59.940 "supported_io_types": { 00:10:59.940 "read": true, 00:10:59.940 "write": true, 00:10:59.940 "unmap": true, 00:10:59.940 "flush": true, 00:10:59.940 "reset": true, 00:10:59.940 "nvme_admin": false, 00:10:59.940 "nvme_io": false, 00:10:59.940 "nvme_io_md": false, 00:10:59.940 "write_zeroes": true, 00:10:59.940 "zcopy": false, 00:10:59.940 "get_zone_info": false, 00:10:59.940 "zone_management": false, 00:10:59.940 "zone_append": false, 00:10:59.940 "compare": false, 00:10:59.940 "compare_and_write": false, 00:10:59.940 "abort": false, 00:10:59.940 "seek_hole": false, 00:10:59.940 "seek_data": false, 00:10:59.940 "copy": false, 00:10:59.940 "nvme_iov_md": false 00:10:59.940 }, 00:10:59.940 "memory_domains": [ 00:10:59.940 { 00:10:59.940 "dma_device_id": "system", 00:10:59.940 "dma_device_type": 1 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.940 "dma_device_type": 2 00:10:59.940 }, 
00:10:59.940 { 00:10:59.940 "dma_device_id": "system", 00:10:59.940 "dma_device_type": 1 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.940 "dma_device_type": 2 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "dma_device_id": "system", 00:10:59.940 "dma_device_type": 1 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.940 "dma_device_type": 2 00:10:59.940 } 00:10:59.940 ], 00:10:59.940 "driver_specific": { 00:10:59.940 "raid": { 00:10:59.940 "uuid": "0f4929e1-ee4a-47c2-95a0-791ecea5b22e", 00:10:59.940 "strip_size_kb": 64, 00:10:59.940 "state": "online", 00:10:59.940 "raid_level": "raid0", 00:10:59.940 "superblock": false, 00:10:59.940 "num_base_bdevs": 3, 00:10:59.940 "num_base_bdevs_discovered": 3, 00:10:59.940 "num_base_bdevs_operational": 3, 00:10:59.940 "base_bdevs_list": [ 00:10:59.940 { 00:10:59.940 "name": "NewBaseBdev", 00:10:59.940 "uuid": "335767a0-2bc2-4c0a-9555-4ba93a91633a", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 0, 00:10:59.940 "data_size": 65536 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "name": "BaseBdev2", 00:10:59.940 "uuid": "93502cb6-59ad-458a-aac5-37380a509cd6", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 0, 00:10:59.940 "data_size": 65536 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "name": "BaseBdev3", 00:10:59.940 "uuid": "db3e08b3-4ed6-4619-88a2-f774de3ed6ee", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 0, 00:10:59.940 "data_size": 65536 00:10:59.940 } 00:10:59.940 ] 00:10:59.940 } 00:10:59.940 } 00:10:59.940 }' 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.940 BaseBdev2 00:10:59.940 BaseBdev3' 00:10:59.940 13:45:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.940 13:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.940 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.940 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.940 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.940 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.941 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 [2024-10-01 13:45:10.130119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.941 [2024-10-01 13:45:10.130277] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.941 [2024-10-01 13:45:10.130595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.200 [2024-10-01 13:45:10.130753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.200 [2024-10-01 13:45:10.130860] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63718 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63718 ']' 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63718 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63718 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.200 killing process with pid 63718 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63718' 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63718 00:11:00.200 [2024-10-01 13:45:10.186652] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.200 13:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63718 00:11:00.458 [2024-10-01 13:45:10.505123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.833 00:11:01.833 real 0m10.898s 00:11:01.833 user 0m17.234s 00:11:01.833 sys 0m2.041s 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:11:01.833 ************************************ 00:11:01.833 END TEST raid_state_function_test 00:11:01.833 ************************************ 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.833 13:45:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:01.833 13:45:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:01.833 13:45:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.833 13:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.833 ************************************ 00:11:01.833 START TEST raid_state_function_test_sb 00:11:01.833 ************************************ 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:01.833 Process raid pid: 64345 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64345 
00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64345' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64345 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64345 ']' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.833 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 [2024-10-01 13:45:12.030706] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:02.092 [2024-10-01 13:45:12.031039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.092 [2024-10-01 13:45:12.218564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.351 [2024-10-01 13:45:12.448204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.610 [2024-10-01 13:45:12.674254] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.610 [2024-10-01 13:45:12.674446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.867 [2024-10-01 13:45:12.915523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.867 [2024-10-01 13:45:12.915719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.867 [2024-10-01 13:45:12.915818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.867 [2024-10-01 13:45:12.915959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.867 [2024-10-01 13:45:12.916037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:02.867 [2024-10-01 13:45:12.916081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.867 "name": "Existed_Raid", 00:11:02.867 "uuid": "c62f7f4a-f7af-41e5-8602-79b822cfbaf4", 00:11:02.867 "strip_size_kb": 64, 00:11:02.867 "state": "configuring", 00:11:02.867 "raid_level": "raid0", 00:11:02.867 "superblock": true, 00:11:02.867 "num_base_bdevs": 3, 00:11:02.867 "num_base_bdevs_discovered": 0, 00:11:02.867 "num_base_bdevs_operational": 3, 00:11:02.867 "base_bdevs_list": [ 00:11:02.867 { 00:11:02.867 "name": "BaseBdev1", 00:11:02.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.867 "is_configured": false, 00:11:02.867 "data_offset": 0, 00:11:02.867 "data_size": 0 00:11:02.867 }, 00:11:02.867 { 00:11:02.867 "name": "BaseBdev2", 00:11:02.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.867 "is_configured": false, 00:11:02.867 "data_offset": 0, 00:11:02.867 "data_size": 0 00:11:02.867 }, 00:11:02.867 { 00:11:02.867 "name": "BaseBdev3", 00:11:02.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.867 "is_configured": false, 00:11:02.867 "data_offset": 0, 00:11:02.867 "data_size": 0 00:11:02.867 } 00:11:02.867 ] 00:11:02.867 }' 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.867 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 [2024-10-01 13:45:13.327375] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.435 [2024-10-01 13:45:13.327425] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 [2024-10-01 13:45:13.339410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.435 [2024-10-01 13:45:13.339455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.435 [2024-10-01 13:45:13.339465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.435 [2024-10-01 13:45:13.339478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.435 [2024-10-01 13:45:13.339486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.435 [2024-10-01 13:45:13.339499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 [2024-10-01 13:45:13.401347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.435 BaseBdev1 
00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 [ 00:11:03.435 { 00:11:03.435 "name": "BaseBdev1", 00:11:03.435 "aliases": [ 00:11:03.435 "4ed3a395-2bb8-43fe-b2d0-4480d3b81156" 00:11:03.435 ], 00:11:03.435 "product_name": "Malloc disk", 00:11:03.435 "block_size": 512, 00:11:03.435 "num_blocks": 65536, 00:11:03.435 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:03.435 "assigned_rate_limits": { 00:11:03.435 
"rw_ios_per_sec": 0, 00:11:03.435 "rw_mbytes_per_sec": 0, 00:11:03.435 "r_mbytes_per_sec": 0, 00:11:03.435 "w_mbytes_per_sec": 0 00:11:03.435 }, 00:11:03.435 "claimed": true, 00:11:03.435 "claim_type": "exclusive_write", 00:11:03.435 "zoned": false, 00:11:03.435 "supported_io_types": { 00:11:03.435 "read": true, 00:11:03.435 "write": true, 00:11:03.435 "unmap": true, 00:11:03.435 "flush": true, 00:11:03.435 "reset": true, 00:11:03.435 "nvme_admin": false, 00:11:03.435 "nvme_io": false, 00:11:03.435 "nvme_io_md": false, 00:11:03.435 "write_zeroes": true, 00:11:03.435 "zcopy": true, 00:11:03.435 "get_zone_info": false, 00:11:03.435 "zone_management": false, 00:11:03.435 "zone_append": false, 00:11:03.435 "compare": false, 00:11:03.435 "compare_and_write": false, 00:11:03.435 "abort": true, 00:11:03.435 "seek_hole": false, 00:11:03.435 "seek_data": false, 00:11:03.435 "copy": true, 00:11:03.435 "nvme_iov_md": false 00:11:03.435 }, 00:11:03.435 "memory_domains": [ 00:11:03.435 { 00:11:03.435 "dma_device_id": "system", 00:11:03.435 "dma_device_type": 1 00:11:03.435 }, 00:11:03.435 { 00:11:03.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.435 "dma_device_type": 2 00:11:03.435 } 00:11:03.435 ], 00:11:03.435 "driver_specific": {} 00:11:03.435 } 00:11:03.435 ] 00:11:03.435 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.436 "name": "Existed_Raid", 00:11:03.436 "uuid": "238049fc-ee9a-4ebd-bae6-15aa044a2248", 00:11:03.436 "strip_size_kb": 64, 00:11:03.436 "state": "configuring", 00:11:03.436 "raid_level": "raid0", 00:11:03.436 "superblock": true, 00:11:03.436 "num_base_bdevs": 3, 00:11:03.436 "num_base_bdevs_discovered": 1, 00:11:03.436 "num_base_bdevs_operational": 3, 00:11:03.436 "base_bdevs_list": [ 00:11:03.436 { 00:11:03.436 "name": "BaseBdev1", 00:11:03.436 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:03.436 "is_configured": true, 00:11:03.436 "data_offset": 2048, 00:11:03.436 "data_size": 63488 
00:11:03.436 }, 00:11:03.436 { 00:11:03.436 "name": "BaseBdev2", 00:11:03.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.436 "is_configured": false, 00:11:03.436 "data_offset": 0, 00:11:03.436 "data_size": 0 00:11:03.436 }, 00:11:03.436 { 00:11:03.436 "name": "BaseBdev3", 00:11:03.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.436 "is_configured": false, 00:11:03.436 "data_offset": 0, 00:11:03.436 "data_size": 0 00:11:03.436 } 00:11:03.436 ] 00:11:03.436 }' 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.436 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.695 [2024-10-01 13:45:13.868736] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.695 [2024-10-01 13:45:13.868789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.695 [2024-10-01 13:45:13.880764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.695 [2024-10-01 
13:45:13.882946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.695 [2024-10-01 13:45:13.883092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.695 [2024-10-01 13:45:13.883171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.695 [2024-10-01 13:45:13.883213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.695 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.955 "name": "Existed_Raid", 00:11:03.955 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:03.955 "strip_size_kb": 64, 00:11:03.955 "state": "configuring", 00:11:03.955 "raid_level": "raid0", 00:11:03.955 "superblock": true, 00:11:03.955 "num_base_bdevs": 3, 00:11:03.955 "num_base_bdevs_discovered": 1, 00:11:03.955 "num_base_bdevs_operational": 3, 00:11:03.955 "base_bdevs_list": [ 00:11:03.955 { 00:11:03.955 "name": "BaseBdev1", 00:11:03.955 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:03.955 "is_configured": true, 00:11:03.955 "data_offset": 2048, 00:11:03.955 "data_size": 63488 00:11:03.955 }, 00:11:03.955 { 00:11:03.955 "name": "BaseBdev2", 00:11:03.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.955 "is_configured": false, 00:11:03.955 "data_offset": 0, 00:11:03.955 "data_size": 0 00:11:03.955 }, 00:11:03.955 { 00:11:03.955 "name": "BaseBdev3", 00:11:03.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.955 "is_configured": false, 00:11:03.955 "data_offset": 0, 00:11:03.955 "data_size": 0 00:11:03.955 } 00:11:03.955 ] 00:11:03.955 }' 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.955 13:45:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 [2024-10-01 13:45:14.389859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.214 BaseBdev2 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.214 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.473 [ 00:11:04.473 { 00:11:04.473 "name": "BaseBdev2", 00:11:04.473 "aliases": [ 00:11:04.473 "cecedb7a-df01-4dc3-8019-1fd7ff31f009" 00:11:04.473 ], 00:11:04.473 "product_name": "Malloc disk", 00:11:04.473 "block_size": 512, 00:11:04.473 "num_blocks": 65536, 00:11:04.473 "uuid": "cecedb7a-df01-4dc3-8019-1fd7ff31f009", 00:11:04.473 "assigned_rate_limits": { 00:11:04.473 "rw_ios_per_sec": 0, 00:11:04.473 "rw_mbytes_per_sec": 0, 00:11:04.473 "r_mbytes_per_sec": 0, 00:11:04.473 "w_mbytes_per_sec": 0 00:11:04.473 }, 00:11:04.473 "claimed": true, 00:11:04.473 "claim_type": "exclusive_write", 00:11:04.473 "zoned": false, 00:11:04.473 "supported_io_types": { 00:11:04.473 "read": true, 00:11:04.474 "write": true, 00:11:04.474 "unmap": true, 00:11:04.474 "flush": true, 00:11:04.474 "reset": true, 00:11:04.474 "nvme_admin": false, 00:11:04.474 "nvme_io": false, 00:11:04.474 "nvme_io_md": false, 00:11:04.474 "write_zeroes": true, 00:11:04.474 "zcopy": true, 00:11:04.474 "get_zone_info": false, 00:11:04.474 "zone_management": false, 00:11:04.474 "zone_append": false, 00:11:04.474 "compare": false, 00:11:04.474 "compare_and_write": false, 00:11:04.474 "abort": true, 00:11:04.474 "seek_hole": false, 00:11:04.474 "seek_data": false, 00:11:04.474 "copy": true, 00:11:04.474 "nvme_iov_md": false 00:11:04.474 }, 00:11:04.474 "memory_domains": [ 00:11:04.474 { 00:11:04.474 "dma_device_id": "system", 00:11:04.474 "dma_device_type": 1 00:11:04.474 }, 00:11:04.474 { 00:11:04.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.474 "dma_device_type": 2 00:11:04.474 } 00:11:04.474 ], 00:11:04.474 "driver_specific": {} 00:11:04.474 } 00:11:04.474 ] 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.474 "name": "Existed_Raid", 00:11:04.474 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:04.474 "strip_size_kb": 64, 00:11:04.474 "state": "configuring", 00:11:04.474 "raid_level": "raid0", 00:11:04.474 "superblock": true, 00:11:04.474 "num_base_bdevs": 3, 00:11:04.474 "num_base_bdevs_discovered": 2, 00:11:04.474 "num_base_bdevs_operational": 3, 00:11:04.474 "base_bdevs_list": [ 00:11:04.474 { 00:11:04.474 "name": "BaseBdev1", 00:11:04.474 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:04.474 "is_configured": true, 00:11:04.474 "data_offset": 2048, 00:11:04.474 "data_size": 63488 00:11:04.474 }, 00:11:04.474 { 00:11:04.474 "name": "BaseBdev2", 00:11:04.474 "uuid": "cecedb7a-df01-4dc3-8019-1fd7ff31f009", 00:11:04.474 "is_configured": true, 00:11:04.474 "data_offset": 2048, 00:11:04.474 "data_size": 63488 00:11:04.474 }, 00:11:04.474 { 00:11:04.474 "name": "BaseBdev3", 00:11:04.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.474 "is_configured": false, 00:11:04.474 "data_offset": 0, 00:11:04.474 "data_size": 0 00:11:04.474 } 00:11:04.474 ] 00:11:04.474 }' 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.474 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.734 [2024-10-01 13:45:14.904730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.734 [2024-10-01 13:45:14.904977] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.734 [2024-10-01 13:45:14.904999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.734 [2024-10-01 13:45:14.905273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:04.734 [2024-10-01 13:45:14.905432] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.734 [2024-10-01 13:45:14.905446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:04.734 BaseBdev3 00:11:04.734 [2024-10-01 13:45:14.905594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.734 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.994 [ 00:11:04.994 { 00:11:04.994 "name": "BaseBdev3", 00:11:04.994 "aliases": [ 00:11:04.994 "b0c6858b-e0f7-4d51-9bd0-91969bd2da7c" 00:11:04.994 ], 00:11:04.994 "product_name": "Malloc disk", 00:11:04.994 "block_size": 512, 00:11:04.994 "num_blocks": 65536, 00:11:04.994 "uuid": "b0c6858b-e0f7-4d51-9bd0-91969bd2da7c", 00:11:04.994 "assigned_rate_limits": { 00:11:04.994 "rw_ios_per_sec": 0, 00:11:04.994 "rw_mbytes_per_sec": 0, 00:11:04.994 "r_mbytes_per_sec": 0, 00:11:04.994 "w_mbytes_per_sec": 0 00:11:04.994 }, 00:11:04.994 "claimed": true, 00:11:04.994 "claim_type": "exclusive_write", 00:11:04.994 "zoned": false, 00:11:04.994 "supported_io_types": { 00:11:04.994 "read": true, 00:11:04.994 "write": true, 00:11:04.994 "unmap": true, 00:11:04.994 "flush": true, 00:11:04.994 "reset": true, 00:11:04.994 "nvme_admin": false, 00:11:04.994 "nvme_io": false, 00:11:04.994 "nvme_io_md": false, 00:11:04.994 "write_zeroes": true, 00:11:04.994 "zcopy": true, 00:11:04.994 "get_zone_info": false, 00:11:04.994 "zone_management": false, 00:11:04.994 "zone_append": false, 00:11:04.994 "compare": false, 00:11:04.994 "compare_and_write": false, 00:11:04.994 "abort": true, 00:11:04.994 "seek_hole": false, 00:11:04.994 "seek_data": false, 00:11:04.994 "copy": true, 00:11:04.994 "nvme_iov_md": false 00:11:04.994 }, 00:11:04.994 "memory_domains": [ 00:11:04.994 { 00:11:04.994 "dma_device_id": "system", 00:11:04.994 "dma_device_type": 1 00:11:04.994 }, 00:11:04.994 { 00:11:04.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.994 "dma_device_type": 2 00:11:04.994 } 00:11:04.994 ], 00:11:04.994 "driver_specific": 
{} 00:11:04.994 } 00:11:04.994 ] 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.994 "name": "Existed_Raid", 00:11:04.994 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:04.994 "strip_size_kb": 64, 00:11:04.994 "state": "online", 00:11:04.994 "raid_level": "raid0", 00:11:04.994 "superblock": true, 00:11:04.994 "num_base_bdevs": 3, 00:11:04.994 "num_base_bdevs_discovered": 3, 00:11:04.994 "num_base_bdevs_operational": 3, 00:11:04.994 "base_bdevs_list": [ 00:11:04.994 { 00:11:04.994 "name": "BaseBdev1", 00:11:04.994 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:04.994 "is_configured": true, 00:11:04.994 "data_offset": 2048, 00:11:04.994 "data_size": 63488 00:11:04.994 }, 00:11:04.994 { 00:11:04.994 "name": "BaseBdev2", 00:11:04.994 "uuid": "cecedb7a-df01-4dc3-8019-1fd7ff31f009", 00:11:04.994 "is_configured": true, 00:11:04.994 "data_offset": 2048, 00:11:04.994 "data_size": 63488 00:11:04.994 }, 00:11:04.994 { 00:11:04.994 "name": "BaseBdev3", 00:11:04.994 "uuid": "b0c6858b-e0f7-4d51-9bd0-91969bd2da7c", 00:11:04.994 "is_configured": true, 00:11:04.994 "data_offset": 2048, 00:11:04.994 "data_size": 63488 00:11:04.994 } 00:11:04.994 ] 00:11:04.994 }' 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.994 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.253 [2024-10-01 13:45:15.392448] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.253 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.253 "name": "Existed_Raid", 00:11:05.253 "aliases": [ 00:11:05.253 "0d373fcc-4eb9-43a7-98d9-f8328b2339de" 00:11:05.253 ], 00:11:05.253 "product_name": "Raid Volume", 00:11:05.253 "block_size": 512, 00:11:05.253 "num_blocks": 190464, 00:11:05.253 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:05.253 "assigned_rate_limits": { 00:11:05.253 "rw_ios_per_sec": 0, 00:11:05.253 "rw_mbytes_per_sec": 0, 00:11:05.253 "r_mbytes_per_sec": 0, 00:11:05.253 "w_mbytes_per_sec": 0 00:11:05.253 }, 00:11:05.253 "claimed": false, 00:11:05.253 "zoned": false, 00:11:05.253 "supported_io_types": { 00:11:05.253 "read": true, 00:11:05.253 "write": true, 00:11:05.253 "unmap": true, 00:11:05.253 "flush": true, 00:11:05.253 "reset": true, 00:11:05.253 "nvme_admin": false, 00:11:05.253 "nvme_io": false, 00:11:05.253 "nvme_io_md": false, 00:11:05.253 
"write_zeroes": true, 00:11:05.253 "zcopy": false, 00:11:05.253 "get_zone_info": false, 00:11:05.253 "zone_management": false, 00:11:05.253 "zone_append": false, 00:11:05.253 "compare": false, 00:11:05.253 "compare_and_write": false, 00:11:05.253 "abort": false, 00:11:05.253 "seek_hole": false, 00:11:05.253 "seek_data": false, 00:11:05.253 "copy": false, 00:11:05.253 "nvme_iov_md": false 00:11:05.253 }, 00:11:05.253 "memory_domains": [ 00:11:05.253 { 00:11:05.253 "dma_device_id": "system", 00:11:05.253 "dma_device_type": 1 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.253 "dma_device_type": 2 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "dma_device_id": "system", 00:11:05.253 "dma_device_type": 1 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.253 "dma_device_type": 2 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "dma_device_id": "system", 00:11:05.253 "dma_device_type": 1 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.253 "dma_device_type": 2 00:11:05.253 } 00:11:05.253 ], 00:11:05.253 "driver_specific": { 00:11:05.253 "raid": { 00:11:05.253 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:05.253 "strip_size_kb": 64, 00:11:05.253 "state": "online", 00:11:05.253 "raid_level": "raid0", 00:11:05.253 "superblock": true, 00:11:05.253 "num_base_bdevs": 3, 00:11:05.253 "num_base_bdevs_discovered": 3, 00:11:05.253 "num_base_bdevs_operational": 3, 00:11:05.253 "base_bdevs_list": [ 00:11:05.253 { 00:11:05.253 "name": "BaseBdev1", 00:11:05.253 "uuid": "4ed3a395-2bb8-43fe-b2d0-4480d3b81156", 00:11:05.253 "is_configured": true, 00:11:05.253 "data_offset": 2048, 00:11:05.253 "data_size": 63488 00:11:05.253 }, 00:11:05.253 { 00:11:05.253 "name": "BaseBdev2", 00:11:05.254 "uuid": "cecedb7a-df01-4dc3-8019-1fd7ff31f009", 00:11:05.254 "is_configured": true, 00:11:05.254 "data_offset": 2048, 00:11:05.254 "data_size": 63488 00:11:05.254 }, 
00:11:05.254 { 00:11:05.254 "name": "BaseBdev3", 00:11:05.254 "uuid": "b0c6858b-e0f7-4d51-9bd0-91969bd2da7c", 00:11:05.254 "is_configured": true, 00:11:05.254 "data_offset": 2048, 00:11:05.254 "data_size": 63488 00:11:05.254 } 00:11:05.254 ] 00:11:05.254 } 00:11:05.254 } 00:11:05.254 }' 00:11:05.254 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.513 BaseBdev2 00:11:05.513 BaseBdev3' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.513 
13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.513 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.513 [2024-10-01 13:45:15.643799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.513 [2024-10-01 13:45:15.643938] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.513 [2024-10-01 13:45:15.644105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.774 "name": "Existed_Raid", 00:11:05.774 "uuid": "0d373fcc-4eb9-43a7-98d9-f8328b2339de", 00:11:05.774 "strip_size_kb": 64, 00:11:05.774 "state": "offline", 00:11:05.774 "raid_level": "raid0", 00:11:05.774 "superblock": true, 00:11:05.774 "num_base_bdevs": 3, 00:11:05.774 "num_base_bdevs_discovered": 2, 00:11:05.774 "num_base_bdevs_operational": 2, 00:11:05.774 "base_bdevs_list": [ 00:11:05.774 { 00:11:05.774 "name": null, 00:11:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.774 "is_configured": false, 00:11:05.774 "data_offset": 0, 00:11:05.774 "data_size": 63488 00:11:05.774 }, 00:11:05.774 { 00:11:05.774 "name": "BaseBdev2", 00:11:05.774 "uuid": "cecedb7a-df01-4dc3-8019-1fd7ff31f009", 00:11:05.774 "is_configured": true, 00:11:05.774 "data_offset": 2048, 00:11:05.774 "data_size": 63488 00:11:05.774 }, 00:11:05.774 { 00:11:05.774 "name": "BaseBdev3", 00:11:05.774 "uuid": "b0c6858b-e0f7-4d51-9bd0-91969bd2da7c", 
00:11:05.774 "is_configured": true, 00:11:05.774 "data_offset": 2048, 00:11:05.774 "data_size": 63488 00:11:05.774 } 00:11:05.774 ] 00:11:05.774 }' 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.774 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.033 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.033 [2024-10-01 13:45:16.196984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.294 [2024-10-01 13:45:16.345380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.294 [2024-10-01 13:45:16.345559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.294 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.554 BaseBdev2 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.554 13:45:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.554 [ 00:11:06.554 { 00:11:06.554 "name": "BaseBdev2", 00:11:06.554 "aliases": [ 00:11:06.554 "b9e272f4-5d6d-46bb-80b0-63f059d21aa3" 00:11:06.554 ], 00:11:06.554 "product_name": "Malloc disk", 00:11:06.554 "block_size": 512, 00:11:06.554 "num_blocks": 65536, 00:11:06.554 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:06.554 "assigned_rate_limits": { 00:11:06.554 "rw_ios_per_sec": 0, 00:11:06.554 "rw_mbytes_per_sec": 0, 00:11:06.554 "r_mbytes_per_sec": 0, 00:11:06.554 "w_mbytes_per_sec": 0 00:11:06.554 }, 00:11:06.554 "claimed": false, 00:11:06.554 "zoned": false, 00:11:06.554 "supported_io_types": { 00:11:06.554 "read": true, 00:11:06.554 "write": true, 00:11:06.554 "unmap": true, 00:11:06.554 "flush": true, 00:11:06.554 "reset": true, 00:11:06.554 "nvme_admin": false, 00:11:06.554 "nvme_io": false, 00:11:06.554 "nvme_io_md": false, 00:11:06.554 "write_zeroes": true, 00:11:06.554 "zcopy": true, 00:11:06.554 "get_zone_info": false, 00:11:06.554 
"zone_management": false, 00:11:06.554 "zone_append": false, 00:11:06.554 "compare": false, 00:11:06.554 "compare_and_write": false, 00:11:06.554 "abort": true, 00:11:06.554 "seek_hole": false, 00:11:06.554 "seek_data": false, 00:11:06.554 "copy": true, 00:11:06.554 "nvme_iov_md": false 00:11:06.554 }, 00:11:06.554 "memory_domains": [ 00:11:06.554 { 00:11:06.554 "dma_device_id": "system", 00:11:06.554 "dma_device_type": 1 00:11:06.554 }, 00:11:06.554 { 00:11:06.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.554 "dma_device_type": 2 00:11:06.554 } 00:11:06.554 ], 00:11:06.554 "driver_specific": {} 00:11:06.554 } 00:11:06.554 ] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.554 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.555 BaseBdev3 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.555 [ 00:11:06.555 { 00:11:06.555 "name": "BaseBdev3", 00:11:06.555 "aliases": [ 00:11:06.555 "d2682f8f-6823-4160-8a6c-e8fdf9c5e102" 00:11:06.555 ], 00:11:06.555 "product_name": "Malloc disk", 00:11:06.555 "block_size": 512, 00:11:06.555 "num_blocks": 65536, 00:11:06.555 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:06.555 "assigned_rate_limits": { 00:11:06.555 "rw_ios_per_sec": 0, 00:11:06.555 "rw_mbytes_per_sec": 0, 00:11:06.555 "r_mbytes_per_sec": 0, 00:11:06.555 "w_mbytes_per_sec": 0 00:11:06.555 }, 00:11:06.555 "claimed": false, 00:11:06.555 "zoned": false, 00:11:06.555 "supported_io_types": { 00:11:06.555 "read": true, 00:11:06.555 "write": true, 00:11:06.555 "unmap": true, 00:11:06.555 "flush": true, 00:11:06.555 "reset": true, 00:11:06.555 "nvme_admin": false, 00:11:06.555 "nvme_io": false, 00:11:06.555 "nvme_io_md": false, 00:11:06.555 "write_zeroes": true, 00:11:06.555 
"zcopy": true, 00:11:06.555 "get_zone_info": false, 00:11:06.555 "zone_management": false, 00:11:06.555 "zone_append": false, 00:11:06.555 "compare": false, 00:11:06.555 "compare_and_write": false, 00:11:06.555 "abort": true, 00:11:06.555 "seek_hole": false, 00:11:06.555 "seek_data": false, 00:11:06.555 "copy": true, 00:11:06.555 "nvme_iov_md": false 00:11:06.555 }, 00:11:06.555 "memory_domains": [ 00:11:06.555 { 00:11:06.555 "dma_device_id": "system", 00:11:06.555 "dma_device_type": 1 00:11:06.555 }, 00:11:06.555 { 00:11:06.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.555 "dma_device_type": 2 00:11:06.555 } 00:11:06.555 ], 00:11:06.555 "driver_specific": {} 00:11:06.555 } 00:11:06.555 ] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.555 [2024-10-01 13:45:16.673896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.555 [2024-10-01 13:45:16.673952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.555 [2024-10-01 13:45:16.673982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.555 [2024-10-01 13:45:16.676072] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.555 13:45:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.555 "name": "Existed_Raid", 00:11:06.555 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:06.555 "strip_size_kb": 64, 00:11:06.555 "state": "configuring", 00:11:06.555 "raid_level": "raid0", 00:11:06.555 "superblock": true, 00:11:06.555 "num_base_bdevs": 3, 00:11:06.555 "num_base_bdevs_discovered": 2, 00:11:06.555 "num_base_bdevs_operational": 3, 00:11:06.555 "base_bdevs_list": [ 00:11:06.555 { 00:11:06.555 "name": "BaseBdev1", 00:11:06.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.555 "is_configured": false, 00:11:06.555 "data_offset": 0, 00:11:06.555 "data_size": 0 00:11:06.555 }, 00:11:06.555 { 00:11:06.555 "name": "BaseBdev2", 00:11:06.555 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:06.555 "is_configured": true, 00:11:06.555 "data_offset": 2048, 00:11:06.555 "data_size": 63488 00:11:06.555 }, 00:11:06.555 { 00:11:06.555 "name": "BaseBdev3", 00:11:06.555 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:06.555 "is_configured": true, 00:11:06.555 "data_offset": 2048, 00:11:06.555 "data_size": 63488 00:11:06.555 } 00:11:06.555 ] 00:11:06.555 }' 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.555 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.123 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.123 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.123 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.123 [2024-10-01 13:45:17.121346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.123 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.123 13:45:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.124 "name": "Existed_Raid", 00:11:07.124 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:07.124 "strip_size_kb": 64, 
00:11:07.124 "state": "configuring", 00:11:07.124 "raid_level": "raid0", 00:11:07.124 "superblock": true, 00:11:07.124 "num_base_bdevs": 3, 00:11:07.124 "num_base_bdevs_discovered": 1, 00:11:07.124 "num_base_bdevs_operational": 3, 00:11:07.124 "base_bdevs_list": [ 00:11:07.124 { 00:11:07.124 "name": "BaseBdev1", 00:11:07.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.124 "is_configured": false, 00:11:07.124 "data_offset": 0, 00:11:07.124 "data_size": 0 00:11:07.124 }, 00:11:07.124 { 00:11:07.124 "name": null, 00:11:07.124 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:07.124 "is_configured": false, 00:11:07.124 "data_offset": 0, 00:11:07.124 "data_size": 63488 00:11:07.124 }, 00:11:07.124 { 00:11:07.124 "name": "BaseBdev3", 00:11:07.124 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:07.124 "is_configured": true, 00:11:07.124 "data_offset": 2048, 00:11:07.124 "data_size": 63488 00:11:07.124 } 00:11:07.124 ] 00:11:07.124 }' 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.124 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.383 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.642 BaseBdev1 00:11:07.642 [2024-10-01 13:45:17.587809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.642 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.643 
[ 00:11:07.643 { 00:11:07.643 "name": "BaseBdev1", 00:11:07.643 "aliases": [ 00:11:07.643 "52ce8f23-70cd-4616-9fff-859e8a597898" 00:11:07.643 ], 00:11:07.643 "product_name": "Malloc disk", 00:11:07.643 "block_size": 512, 00:11:07.643 "num_blocks": 65536, 00:11:07.643 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:07.643 "assigned_rate_limits": { 00:11:07.643 "rw_ios_per_sec": 0, 00:11:07.643 "rw_mbytes_per_sec": 0, 00:11:07.643 "r_mbytes_per_sec": 0, 00:11:07.643 "w_mbytes_per_sec": 0 00:11:07.643 }, 00:11:07.643 "claimed": true, 00:11:07.643 "claim_type": "exclusive_write", 00:11:07.643 "zoned": false, 00:11:07.643 "supported_io_types": { 00:11:07.643 "read": true, 00:11:07.643 "write": true, 00:11:07.643 "unmap": true, 00:11:07.643 "flush": true, 00:11:07.643 "reset": true, 00:11:07.643 "nvme_admin": false, 00:11:07.643 "nvme_io": false, 00:11:07.643 "nvme_io_md": false, 00:11:07.643 "write_zeroes": true, 00:11:07.643 "zcopy": true, 00:11:07.643 "get_zone_info": false, 00:11:07.643 "zone_management": false, 00:11:07.643 "zone_append": false, 00:11:07.643 "compare": false, 00:11:07.643 "compare_and_write": false, 00:11:07.643 "abort": true, 00:11:07.643 "seek_hole": false, 00:11:07.643 "seek_data": false, 00:11:07.643 "copy": true, 00:11:07.643 "nvme_iov_md": false 00:11:07.643 }, 00:11:07.643 "memory_domains": [ 00:11:07.643 { 00:11:07.643 "dma_device_id": "system", 00:11:07.643 "dma_device_type": 1 00:11:07.643 }, 00:11:07.643 { 00:11:07.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.643 "dma_device_type": 2 00:11:07.643 } 00:11:07.643 ], 00:11:07.643 "driver_specific": {} 00:11:07.643 } 00:11:07.643 ] 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.643 "name": "Existed_Raid", 00:11:07.643 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:07.643 "strip_size_kb": 64, 00:11:07.643 "state": "configuring", 00:11:07.643 "raid_level": "raid0", 00:11:07.643 "superblock": true, 
00:11:07.643 "num_base_bdevs": 3, 00:11:07.643 "num_base_bdevs_discovered": 2, 00:11:07.643 "num_base_bdevs_operational": 3, 00:11:07.643 "base_bdevs_list": [ 00:11:07.643 { 00:11:07.643 "name": "BaseBdev1", 00:11:07.643 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:07.643 "is_configured": true, 00:11:07.643 "data_offset": 2048, 00:11:07.643 "data_size": 63488 00:11:07.643 }, 00:11:07.643 { 00:11:07.643 "name": null, 00:11:07.643 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:07.643 "is_configured": false, 00:11:07.643 "data_offset": 0, 00:11:07.643 "data_size": 63488 00:11:07.643 }, 00:11:07.643 { 00:11:07.643 "name": "BaseBdev3", 00:11:07.643 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:07.643 "is_configured": true, 00:11:07.643 "data_offset": 2048, 00:11:07.643 "data_size": 63488 00:11:07.643 } 00:11:07.643 ] 00:11:07.643 }' 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.643 13:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:07.902 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.162 [2024-10-01 13:45:18.095446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.162 "name": "Existed_Raid", 00:11:08.162 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:08.162 "strip_size_kb": 64, 00:11:08.162 "state": "configuring", 00:11:08.162 "raid_level": "raid0", 00:11:08.162 "superblock": true, 00:11:08.162 "num_base_bdevs": 3, 00:11:08.162 "num_base_bdevs_discovered": 1, 00:11:08.162 "num_base_bdevs_operational": 3, 00:11:08.162 "base_bdevs_list": [ 00:11:08.162 { 00:11:08.162 "name": "BaseBdev1", 00:11:08.162 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:08.162 "is_configured": true, 00:11:08.162 "data_offset": 2048, 00:11:08.162 "data_size": 63488 00:11:08.162 }, 00:11:08.162 { 00:11:08.162 "name": null, 00:11:08.162 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:08.162 "is_configured": false, 00:11:08.162 "data_offset": 0, 00:11:08.162 "data_size": 63488 00:11:08.162 }, 00:11:08.162 { 00:11:08.162 "name": null, 00:11:08.162 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:08.162 "is_configured": false, 00:11:08.162 "data_offset": 0, 00:11:08.162 "data_size": 63488 00:11:08.162 } 00:11:08.162 ] 00:11:08.162 }' 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.162 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.421 [2024-10-01 13:45:18.555330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.421 "name": "Existed_Raid", 00:11:08.421 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:08.421 "strip_size_kb": 64, 00:11:08.421 "state": "configuring", 00:11:08.421 "raid_level": "raid0", 00:11:08.421 "superblock": true, 00:11:08.421 "num_base_bdevs": 3, 00:11:08.421 "num_base_bdevs_discovered": 2, 00:11:08.421 "num_base_bdevs_operational": 3, 00:11:08.421 "base_bdevs_list": [ 00:11:08.421 { 00:11:08.421 "name": "BaseBdev1", 00:11:08.421 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:08.421 "is_configured": true, 00:11:08.421 "data_offset": 2048, 00:11:08.421 "data_size": 63488 00:11:08.421 }, 00:11:08.421 { 00:11:08.421 "name": null, 00:11:08.421 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:08.421 "is_configured": false, 00:11:08.421 "data_offset": 0, 00:11:08.421 "data_size": 63488 00:11:08.421 }, 00:11:08.421 { 00:11:08.421 "name": "BaseBdev3", 00:11:08.421 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:08.421 "is_configured": true, 00:11:08.421 "data_offset": 2048, 00:11:08.421 "data_size": 63488 00:11:08.421 } 00:11:08.421 ] 00:11:08.421 }' 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.421 13:45:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.990 [2024-10-01 13:45:19.066654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.990 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.250 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.250 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.250 "name": "Existed_Raid", 00:11:09.250 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:09.250 "strip_size_kb": 64, 00:11:09.250 "state": "configuring", 00:11:09.250 "raid_level": "raid0", 00:11:09.250 "superblock": true, 00:11:09.250 "num_base_bdevs": 3, 00:11:09.250 "num_base_bdevs_discovered": 1, 00:11:09.250 "num_base_bdevs_operational": 3, 00:11:09.250 "base_bdevs_list": [ 00:11:09.250 { 00:11:09.250 "name": null, 00:11:09.250 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:09.250 "is_configured": false, 00:11:09.250 "data_offset": 0, 00:11:09.250 "data_size": 63488 00:11:09.250 }, 00:11:09.250 { 00:11:09.250 "name": null, 00:11:09.250 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:09.250 "is_configured": false, 00:11:09.250 "data_offset": 0, 00:11:09.250 
"data_size": 63488 00:11:09.250 }, 00:11:09.250 { 00:11:09.250 "name": "BaseBdev3", 00:11:09.250 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:09.250 "is_configured": true, 00:11:09.250 "data_offset": 2048, 00:11:09.250 "data_size": 63488 00:11:09.250 } 00:11:09.250 ] 00:11:09.250 }' 00:11:09.250 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.250 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.510 [2024-10-01 13:45:19.654288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:09.510 13:45:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.510 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.769 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.769 "name": "Existed_Raid", 00:11:09.769 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:09.770 "strip_size_kb": 64, 00:11:09.770 "state": "configuring", 00:11:09.770 "raid_level": "raid0", 00:11:09.770 "superblock": true, 00:11:09.770 "num_base_bdevs": 3, 00:11:09.770 
"num_base_bdevs_discovered": 2, 00:11:09.770 "num_base_bdevs_operational": 3, 00:11:09.770 "base_bdevs_list": [ 00:11:09.770 { 00:11:09.770 "name": null, 00:11:09.770 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:09.770 "is_configured": false, 00:11:09.770 "data_offset": 0, 00:11:09.770 "data_size": 63488 00:11:09.770 }, 00:11:09.770 { 00:11:09.770 "name": "BaseBdev2", 00:11:09.770 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:09.770 "is_configured": true, 00:11:09.770 "data_offset": 2048, 00:11:09.770 "data_size": 63488 00:11:09.770 }, 00:11:09.770 { 00:11:09.770 "name": "BaseBdev3", 00:11:09.770 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:09.770 "is_configured": true, 00:11:09.770 "data_offset": 2048, 00:11:09.770 "data_size": 63488 00:11:09.770 } 00:11:09.770 ] 00:11:09.770 }' 00:11:09.770 13:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.770 13:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.029 13:45:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.029 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 52ce8f23-70cd-4616-9fff-859e8a597898 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.288 [2024-10-01 13:45:20.272743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.288 NewBaseBdev 00:11:10.288 [2024-10-01 13:45:20.273118] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.288 [2024-10-01 13:45:20.273146] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:10.288 [2024-10-01 13:45:20.273415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:10.288 [2024-10-01 13:45:20.273564] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.288 [2024-10-01 13:45:20.273575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.288 [2024-10-01 13:45:20.273714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.288 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.289 [ 00:11:10.289 { 00:11:10.289 "name": "NewBaseBdev", 00:11:10.289 "aliases": [ 00:11:10.289 "52ce8f23-70cd-4616-9fff-859e8a597898" 00:11:10.289 ], 00:11:10.289 "product_name": "Malloc disk", 00:11:10.289 "block_size": 512, 00:11:10.289 "num_blocks": 65536, 00:11:10.289 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:10.289 "assigned_rate_limits": { 00:11:10.289 "rw_ios_per_sec": 0, 00:11:10.289 "rw_mbytes_per_sec": 0, 00:11:10.289 "r_mbytes_per_sec": 0, 00:11:10.289 "w_mbytes_per_sec": 0 00:11:10.289 }, 00:11:10.289 "claimed": true, 00:11:10.289 "claim_type": "exclusive_write", 00:11:10.289 "zoned": false, 00:11:10.289 "supported_io_types": { 00:11:10.289 "read": true, 00:11:10.289 "write": true, 
00:11:10.289 "unmap": true, 00:11:10.289 "flush": true, 00:11:10.289 "reset": true, 00:11:10.289 "nvme_admin": false, 00:11:10.289 "nvme_io": false, 00:11:10.289 "nvme_io_md": false, 00:11:10.289 "write_zeroes": true, 00:11:10.289 "zcopy": true, 00:11:10.289 "get_zone_info": false, 00:11:10.289 "zone_management": false, 00:11:10.289 "zone_append": false, 00:11:10.289 "compare": false, 00:11:10.289 "compare_and_write": false, 00:11:10.289 "abort": true, 00:11:10.289 "seek_hole": false, 00:11:10.289 "seek_data": false, 00:11:10.289 "copy": true, 00:11:10.289 "nvme_iov_md": false 00:11:10.289 }, 00:11:10.289 "memory_domains": [ 00:11:10.289 { 00:11:10.289 "dma_device_id": "system", 00:11:10.289 "dma_device_type": 1 00:11:10.289 }, 00:11:10.289 { 00:11:10.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.289 "dma_device_type": 2 00:11:10.289 } 00:11:10.289 ], 00:11:10.289 "driver_specific": {} 00:11:10.289 } 00:11:10.289 ] 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.289 "name": "Existed_Raid", 00:11:10.289 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:10.289 "strip_size_kb": 64, 00:11:10.289 "state": "online", 00:11:10.289 "raid_level": "raid0", 00:11:10.289 "superblock": true, 00:11:10.289 "num_base_bdevs": 3, 00:11:10.289 "num_base_bdevs_discovered": 3, 00:11:10.289 "num_base_bdevs_operational": 3, 00:11:10.289 "base_bdevs_list": [ 00:11:10.289 { 00:11:10.289 "name": "NewBaseBdev", 00:11:10.289 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:10.289 "is_configured": true, 00:11:10.289 "data_offset": 2048, 00:11:10.289 "data_size": 63488 00:11:10.289 }, 00:11:10.289 { 00:11:10.289 "name": "BaseBdev2", 00:11:10.289 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:10.289 "is_configured": true, 00:11:10.289 "data_offset": 2048, 00:11:10.289 "data_size": 63488 00:11:10.289 }, 00:11:10.289 { 00:11:10.289 "name": "BaseBdev3", 00:11:10.289 "uuid": 
"d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:10.289 "is_configured": true, 00:11:10.289 "data_offset": 2048, 00:11:10.289 "data_size": 63488 00:11:10.289 } 00:11:10.289 ] 00:11:10.289 }' 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.289 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.548 [2024-10-01 13:45:20.700684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.548 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.808 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.808 "name": "Existed_Raid", 00:11:10.808 "aliases": [ 00:11:10.808 "a0918e1d-c03a-4a13-b8eb-bb80df797893" 
00:11:10.808 ], 00:11:10.808 "product_name": "Raid Volume", 00:11:10.808 "block_size": 512, 00:11:10.808 "num_blocks": 190464, 00:11:10.808 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:10.808 "assigned_rate_limits": { 00:11:10.808 "rw_ios_per_sec": 0, 00:11:10.808 "rw_mbytes_per_sec": 0, 00:11:10.808 "r_mbytes_per_sec": 0, 00:11:10.808 "w_mbytes_per_sec": 0 00:11:10.808 }, 00:11:10.808 "claimed": false, 00:11:10.808 "zoned": false, 00:11:10.808 "supported_io_types": { 00:11:10.808 "read": true, 00:11:10.808 "write": true, 00:11:10.808 "unmap": true, 00:11:10.808 "flush": true, 00:11:10.808 "reset": true, 00:11:10.808 "nvme_admin": false, 00:11:10.808 "nvme_io": false, 00:11:10.808 "nvme_io_md": false, 00:11:10.808 "write_zeroes": true, 00:11:10.808 "zcopy": false, 00:11:10.808 "get_zone_info": false, 00:11:10.808 "zone_management": false, 00:11:10.808 "zone_append": false, 00:11:10.808 "compare": false, 00:11:10.808 "compare_and_write": false, 00:11:10.809 "abort": false, 00:11:10.809 "seek_hole": false, 00:11:10.809 "seek_data": false, 00:11:10.809 "copy": false, 00:11:10.809 "nvme_iov_md": false 00:11:10.809 }, 00:11:10.809 "memory_domains": [ 00:11:10.809 { 00:11:10.809 "dma_device_id": "system", 00:11:10.809 "dma_device_type": 1 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.809 "dma_device_type": 2 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "dma_device_id": "system", 00:11:10.809 "dma_device_type": 1 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.809 "dma_device_type": 2 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "dma_device_id": "system", 00:11:10.809 "dma_device_type": 1 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.809 "dma_device_type": 2 00:11:10.809 } 00:11:10.809 ], 00:11:10.809 "driver_specific": { 00:11:10.809 "raid": { 00:11:10.809 "uuid": "a0918e1d-c03a-4a13-b8eb-bb80df797893", 00:11:10.809 
"strip_size_kb": 64, 00:11:10.809 "state": "online", 00:11:10.809 "raid_level": "raid0", 00:11:10.809 "superblock": true, 00:11:10.809 "num_base_bdevs": 3, 00:11:10.809 "num_base_bdevs_discovered": 3, 00:11:10.809 "num_base_bdevs_operational": 3, 00:11:10.809 "base_bdevs_list": [ 00:11:10.809 { 00:11:10.809 "name": "NewBaseBdev", 00:11:10.809 "uuid": "52ce8f23-70cd-4616-9fff-859e8a597898", 00:11:10.809 "is_configured": true, 00:11:10.809 "data_offset": 2048, 00:11:10.809 "data_size": 63488 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "name": "BaseBdev2", 00:11:10.809 "uuid": "b9e272f4-5d6d-46bb-80b0-63f059d21aa3", 00:11:10.809 "is_configured": true, 00:11:10.809 "data_offset": 2048, 00:11:10.809 "data_size": 63488 00:11:10.809 }, 00:11:10.809 { 00:11:10.809 "name": "BaseBdev3", 00:11:10.809 "uuid": "d2682f8f-6823-4160-8a6c-e8fdf9c5e102", 00:11:10.809 "is_configured": true, 00:11:10.809 "data_offset": 2048, 00:11:10.809 "data_size": 63488 00:11:10.809 } 00:11:10.809 ] 00:11:10.809 } 00:11:10.809 } 00:11:10.809 }' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.809 BaseBdev2 00:11:10.809 BaseBdev3' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.809 13:45:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.809 [2024-10-01 13:45:20.959983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.809 [2024-10-01 13:45:20.960128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.809 [2024-10-01 13:45:20.960292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.809 [2024-10-01 13:45:20.960431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.809 [2024-10-01 13:45:20.960588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64345 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64345 ']' 00:11:10.809 13:45:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64345 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.809 13:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64345 00:11:11.067 killing process with pid 64345 00:11:11.067 13:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:11.067 13:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:11.067 13:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64345' 00:11:11.067 13:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64345 00:11:11.067 [2024-10-01 13:45:21.011207] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.067 13:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64345 00:11:11.326 [2024-10-01 13:45:21.320167] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.702 13:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.702 00:11:12.702 real 0m10.647s 00:11:12.702 user 0m16.779s 00:11:12.702 sys 0m2.133s 00:11:12.702 13:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.702 13:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.702 ************************************ 00:11:12.702 END TEST raid_state_function_test_sb 00:11:12.702 ************************************ 00:11:12.702 13:45:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:12.702 13:45:22 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.702 13:45:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.702 13:45:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.702 ************************************ 00:11:12.702 START TEST raid_superblock_test 00:11:12.702 ************************************ 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:12.702 13:45:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64965 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64965 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64965 ']' 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.702 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.702 [2024-10-01 13:45:22.727033] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:12.702 [2024-10-01 13:45:22.727170] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64965 ] 00:11:12.961 [2024-10-01 13:45:22.893883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.961 [2024-10-01 13:45:23.148980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.220 [2024-10-01 13:45:23.366284] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.220 [2024-10-01 13:45:23.366338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:13.486 
13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.486 malloc1 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.486 [2024-10-01 13:45:23.650268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.486 [2024-10-01 13:45:23.650466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.486 [2024-10-01 13:45:23.650527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:13.486 [2024-10-01 13:45:23.650637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.486 [2024-10-01 13:45:23.653251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.486 [2024-10-01 13:45:23.653412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.486 pt1 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.486 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 malloc2 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 [2024-10-01 13:45:23.719099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:13.770 [2024-10-01 13:45:23.719284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.770 [2024-10-01 13:45:23.719363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:13.770 [2024-10-01 13:45:23.719529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.770 [2024-10-01 13:45:23.721981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.770 [2024-10-01 13:45:23.722110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:13.770 
pt2 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 malloc3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 [2024-10-01 13:45:23.780869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:13.770 [2024-10-01 13:45:23.781031] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.770 [2024-10-01 13:45:23.781091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:13.770 [2024-10-01 13:45:23.781159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.770 [2024-10-01 13:45:23.783760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.770 [2024-10-01 13:45:23.783903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:13.770 pt3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 [2024-10-01 13:45:23.796921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.770 [2024-10-01 13:45:23.799177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.770 [2024-10-01 13:45:23.799378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:13.770 [2024-10-01 13:45:23.799596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:13.770 [2024-10-01 13:45:23.799698] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:13.770 [2024-10-01 13:45:23.800008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:13.770 [2024-10-01 13:45:23.800203] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:13.770 [2024-10-01 13:45:23.800244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:13.770 [2024-10-01 13:45:23.800536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.770 13:45:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.770 "name": "raid_bdev1", 00:11:13.770 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:13.770 "strip_size_kb": 64, 00:11:13.770 "state": "online", 00:11:13.770 "raid_level": "raid0", 00:11:13.770 "superblock": true, 00:11:13.770 "num_base_bdevs": 3, 00:11:13.770 "num_base_bdevs_discovered": 3, 00:11:13.770 "num_base_bdevs_operational": 3, 00:11:13.770 "base_bdevs_list": [ 00:11:13.770 { 00:11:13.770 "name": "pt1", 00:11:13.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.770 "is_configured": true, 00:11:13.770 "data_offset": 2048, 00:11:13.770 "data_size": 63488 00:11:13.770 }, 00:11:13.770 { 00:11:13.770 "name": "pt2", 00:11:13.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.770 "is_configured": true, 00:11:13.770 "data_offset": 2048, 00:11:13.770 "data_size": 63488 00:11:13.770 }, 00:11:13.770 { 00:11:13.770 "name": "pt3", 00:11:13.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.770 "is_configured": true, 00:11:13.770 "data_offset": 2048, 00:11:13.770 "data_size": 63488 00:11:13.770 } 00:11:13.770 ] 00:11:13.770 }' 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.770 13:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.339 [2024-10-01 13:45:24.248643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.339 "name": "raid_bdev1", 00:11:14.339 "aliases": [ 00:11:14.339 "57e0b987-e287-4471-b972-209c649e66b2" 00:11:14.339 ], 00:11:14.339 "product_name": "Raid Volume", 00:11:14.339 "block_size": 512, 00:11:14.339 "num_blocks": 190464, 00:11:14.339 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:14.339 "assigned_rate_limits": { 00:11:14.339 "rw_ios_per_sec": 0, 00:11:14.339 "rw_mbytes_per_sec": 0, 00:11:14.339 "r_mbytes_per_sec": 0, 00:11:14.339 "w_mbytes_per_sec": 0 00:11:14.339 }, 00:11:14.339 "claimed": false, 00:11:14.339 "zoned": false, 00:11:14.339 "supported_io_types": { 00:11:14.339 "read": true, 00:11:14.339 "write": true, 00:11:14.339 "unmap": true, 00:11:14.339 "flush": true, 00:11:14.339 "reset": true, 00:11:14.339 "nvme_admin": false, 00:11:14.339 "nvme_io": false, 00:11:14.339 "nvme_io_md": false, 00:11:14.339 "write_zeroes": true, 00:11:14.339 "zcopy": false, 00:11:14.339 "get_zone_info": false, 00:11:14.339 "zone_management": false, 00:11:14.339 "zone_append": false, 00:11:14.339 "compare": 
false, 00:11:14.339 "compare_and_write": false, 00:11:14.339 "abort": false, 00:11:14.339 "seek_hole": false, 00:11:14.339 "seek_data": false, 00:11:14.339 "copy": false, 00:11:14.339 "nvme_iov_md": false 00:11:14.339 }, 00:11:14.339 "memory_domains": [ 00:11:14.339 { 00:11:14.339 "dma_device_id": "system", 00:11:14.339 "dma_device_type": 1 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.339 "dma_device_type": 2 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "dma_device_id": "system", 00:11:14.339 "dma_device_type": 1 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.339 "dma_device_type": 2 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "dma_device_id": "system", 00:11:14.339 "dma_device_type": 1 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.339 "dma_device_type": 2 00:11:14.339 } 00:11:14.339 ], 00:11:14.339 "driver_specific": { 00:11:14.339 "raid": { 00:11:14.339 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:14.339 "strip_size_kb": 64, 00:11:14.339 "state": "online", 00:11:14.339 "raid_level": "raid0", 00:11:14.339 "superblock": true, 00:11:14.339 "num_base_bdevs": 3, 00:11:14.339 "num_base_bdevs_discovered": 3, 00:11:14.339 "num_base_bdevs_operational": 3, 00:11:14.339 "base_bdevs_list": [ 00:11:14.339 { 00:11:14.339 "name": "pt1", 00:11:14.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.339 "is_configured": true, 00:11:14.339 "data_offset": 2048, 00:11:14.339 "data_size": 63488 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "name": "pt2", 00:11:14.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.339 "is_configured": true, 00:11:14.339 "data_offset": 2048, 00:11:14.339 "data_size": 63488 00:11:14.339 }, 00:11:14.339 { 00:11:14.339 "name": "pt3", 00:11:14.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.339 "is_configured": true, 00:11:14.339 "data_offset": 2048, 00:11:14.339 "data_size": 
63488 00:11:14.339 } 00:11:14.339 ] 00:11:14.339 } 00:11:14.339 } 00:11:14.339 }' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:14.339 pt2 00:11:14.339 pt3' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.339 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.340 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.340 [2024-10-01 13:45:24.504215] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57e0b987-e287-4471-b972-209c649e66b2 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 57e0b987-e287-4471-b972-209c649e66b2 ']' 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.599 [2024-10-01 13:45:24.547867] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.599 [2024-10-01 13:45:24.548012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.599 [2024-10-01 13:45:24.548172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.599 [2024-10-01 13:45:24.548271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.599 [2024-10-01 13:45:24.548288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:14.599 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:14.600 13:45:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [2024-10-01 13:45:24.687696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:14.600 [2024-10-01 13:45:24.690018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:14.600 [2024-10-01 13:45:24.690073] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:14.600 [2024-10-01 13:45:24.690123] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:14.600 [2024-10-01 13:45:24.690180] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:14.600 [2024-10-01 13:45:24.690201] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:14.600 [2024-10-01 13:45:24.690222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.600 [2024-10-01 13:45:24.690232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:14.600 request: 00:11:14.600 { 00:11:14.600 "name": "raid_bdev1", 00:11:14.600 "raid_level": "raid0", 00:11:14.600 "base_bdevs": [ 00:11:14.600 "malloc1", 00:11:14.600 "malloc2", 00:11:14.600 "malloc3" 00:11:14.600 ], 00:11:14.600 "strip_size_kb": 64, 00:11:14.600 "superblock": false, 00:11:14.600 "method": "bdev_raid_create", 00:11:14.600 "req_id": 1 00:11:14.600 } 00:11:14.600 Got JSON-RPC error response 00:11:14.600 response: 00:11:14.600 { 00:11:14.600 "code": -17, 00:11:14.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:14.600 } 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [2024-10-01 13:45:24.747567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:14.600 [2024-10-01 13:45:24.747742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.600 [2024-10-01 13:45:24.747818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:14.600 [2024-10-01 13:45:24.747893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.600 [2024-10-01 13:45:24.750464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.600 [2024-10-01 13:45:24.750598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:14.600 [2024-10-01 13:45:24.750764] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:14.600 [2024-10-01 13:45:24.750904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:14.600 pt1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.600 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.861 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.861 "name": "raid_bdev1", 00:11:14.861 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:14.861 
"strip_size_kb": 64, 00:11:14.861 "state": "configuring", 00:11:14.861 "raid_level": "raid0", 00:11:14.861 "superblock": true, 00:11:14.861 "num_base_bdevs": 3, 00:11:14.861 "num_base_bdevs_discovered": 1, 00:11:14.861 "num_base_bdevs_operational": 3, 00:11:14.861 "base_bdevs_list": [ 00:11:14.861 { 00:11:14.861 "name": "pt1", 00:11:14.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.861 "is_configured": true, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "name": null, 00:11:14.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.861 "is_configured": false, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "name": null, 00:11:14.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.861 "is_configured": false, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 } 00:11:14.861 ] 00:11:14.861 }' 00:11:14.861 13:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.861 13:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 [2024-10-01 13:45:25.131412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.120 [2024-10-01 13:45:25.131600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.120 [2024-10-01 13:45:25.131663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:15.120 [2024-10-01 13:45:25.131754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.120 [2024-10-01 13:45:25.132235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.120 [2024-10-01 13:45:25.132255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.120 [2024-10-01 13:45:25.132342] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.120 [2024-10-01 13:45:25.132364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.120 pt2 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 [2024-10-01 13:45:25.139424] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.120 13:45:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.120 "name": "raid_bdev1", 00:11:15.120 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:15.120 "strip_size_kb": 64, 00:11:15.120 "state": "configuring", 00:11:15.120 "raid_level": "raid0", 00:11:15.120 "superblock": true, 00:11:15.120 "num_base_bdevs": 3, 00:11:15.120 "num_base_bdevs_discovered": 1, 00:11:15.120 "num_base_bdevs_operational": 3, 00:11:15.120 "base_bdevs_list": [ 00:11:15.120 { 00:11:15.120 "name": "pt1", 00:11:15.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.120 "is_configured": true, 00:11:15.120 "data_offset": 2048, 00:11:15.120 "data_size": 63488 00:11:15.120 }, 00:11:15.120 { 00:11:15.120 "name": null, 00:11:15.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.120 "is_configured": false, 00:11:15.120 "data_offset": 0, 00:11:15.120 "data_size": 63488 00:11:15.120 }, 00:11:15.120 { 00:11:15.120 "name": null, 00:11:15.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.120 
"is_configured": false, 00:11:15.120 "data_offset": 2048, 00:11:15.120 "data_size": 63488 00:11:15.120 } 00:11:15.120 ] 00:11:15.120 }' 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.120 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.379 [2024-10-01 13:45:25.559381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.379 [2024-10-01 13:45:25.559594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.379 [2024-10-01 13:45:25.559669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:15.379 [2024-10-01 13:45:25.559760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.379 [2024-10-01 13:45:25.560295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.379 [2024-10-01 13:45:25.560466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.379 [2024-10-01 13:45:25.560575] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.379 [2024-10-01 13:45:25.560619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.379 pt2 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.379 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.638 [2024-10-01 13:45:25.571411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:15.638 [2024-10-01 13:45:25.571465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.638 [2024-10-01 13:45:25.571483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.638 [2024-10-01 13:45:25.571498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.638 [2024-10-01 13:45:25.571916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.639 [2024-10-01 13:45:25.571942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:15.639 [2024-10-01 13:45:25.572021] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:15.639 [2024-10-01 13:45:25.572046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:15.639 [2024-10-01 13:45:25.572175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.639 [2024-10-01 13:45:25.572188] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:15.639 [2024-10-01 13:45:25.572483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.639 [2024-10-01 13:45:25.572640] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.639 [2024-10-01 13:45:25.572650] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:15.639 [2024-10-01 13:45:25.572803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.639 pt3 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.639 "name": "raid_bdev1", 00:11:15.639 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:15.639 "strip_size_kb": 64, 00:11:15.639 "state": "online", 00:11:15.639 "raid_level": "raid0", 00:11:15.639 "superblock": true, 00:11:15.639 "num_base_bdevs": 3, 00:11:15.639 "num_base_bdevs_discovered": 3, 00:11:15.639 "num_base_bdevs_operational": 3, 00:11:15.639 "base_bdevs_list": [ 00:11:15.639 { 00:11:15.639 "name": "pt1", 00:11:15.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.639 "is_configured": true, 00:11:15.639 "data_offset": 2048, 00:11:15.639 "data_size": 63488 00:11:15.639 }, 00:11:15.639 { 00:11:15.639 "name": "pt2", 00:11:15.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.639 "is_configured": true, 00:11:15.639 "data_offset": 2048, 00:11:15.639 "data_size": 63488 00:11:15.639 }, 00:11:15.639 { 00:11:15.639 "name": "pt3", 00:11:15.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.639 "is_configured": true, 00:11:15.639 "data_offset": 2048, 00:11:15.639 "data_size": 63488 00:11:15.639 } 00:11:15.639 ] 00:11:15.639 }' 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.639 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.899 13:45:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.899 13:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.899 [2024-10-01 13:45:25.999111] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.899 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.899 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.899 "name": "raid_bdev1", 00:11:15.899 "aliases": [ 00:11:15.899 "57e0b987-e287-4471-b972-209c649e66b2" 00:11:15.899 ], 00:11:15.899 "product_name": "Raid Volume", 00:11:15.899 "block_size": 512, 00:11:15.899 "num_blocks": 190464, 00:11:15.899 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:15.899 "assigned_rate_limits": { 00:11:15.899 "rw_ios_per_sec": 0, 00:11:15.899 "rw_mbytes_per_sec": 0, 00:11:15.899 "r_mbytes_per_sec": 0, 00:11:15.899 "w_mbytes_per_sec": 0 00:11:15.899 }, 00:11:15.899 "claimed": false, 00:11:15.899 "zoned": false, 00:11:15.899 "supported_io_types": { 00:11:15.899 "read": true, 00:11:15.899 "write": true, 00:11:15.899 "unmap": true, 00:11:15.899 "flush": true, 00:11:15.899 "reset": true, 00:11:15.899 "nvme_admin": false, 00:11:15.899 "nvme_io": false, 00:11:15.899 "nvme_io_md": false, 00:11:15.899 
"write_zeroes": true, 00:11:15.899 "zcopy": false, 00:11:15.899 "get_zone_info": false, 00:11:15.899 "zone_management": false, 00:11:15.899 "zone_append": false, 00:11:15.899 "compare": false, 00:11:15.899 "compare_and_write": false, 00:11:15.899 "abort": false, 00:11:15.899 "seek_hole": false, 00:11:15.899 "seek_data": false, 00:11:15.899 "copy": false, 00:11:15.899 "nvme_iov_md": false 00:11:15.899 }, 00:11:15.899 "memory_domains": [ 00:11:15.899 { 00:11:15.899 "dma_device_id": "system", 00:11:15.899 "dma_device_type": 1 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.899 "dma_device_type": 2 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "system", 00:11:15.899 "dma_device_type": 1 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.899 "dma_device_type": 2 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "system", 00:11:15.899 "dma_device_type": 1 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.899 "dma_device_type": 2 00:11:15.899 } 00:11:15.899 ], 00:11:15.899 "driver_specific": { 00:11:15.899 "raid": { 00:11:15.899 "uuid": "57e0b987-e287-4471-b972-209c649e66b2", 00:11:15.899 "strip_size_kb": 64, 00:11:15.899 "state": "online", 00:11:15.899 "raid_level": "raid0", 00:11:15.899 "superblock": true, 00:11:15.899 "num_base_bdevs": 3, 00:11:15.899 "num_base_bdevs_discovered": 3, 00:11:15.899 "num_base_bdevs_operational": 3, 00:11:15.899 "base_bdevs_list": [ 00:11:15.899 { 00:11:15.899 "name": "pt1", 00:11:15.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.899 "is_configured": true, 00:11:15.899 "data_offset": 2048, 00:11:15.899 "data_size": 63488 00:11:15.899 }, 00:11:15.899 { 00:11:15.899 "name": "pt2", 00:11:15.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.899 "is_configured": true, 00:11:15.899 "data_offset": 2048, 00:11:15.899 "data_size": 63488 00:11:15.899 }, 00:11:15.899 
{ 00:11:15.899 "name": "pt3", 00:11:15.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.899 "is_configured": true, 00:11:15.899 "data_offset": 2048, 00:11:15.899 "data_size": 63488 00:11:15.899 } 00:11:15.899 ] 00:11:15.899 } 00:11:15.899 } 00:11:15.899 }' 00:11:15.899 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.899 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.899 pt2 00:11:15.899 pt3' 00:11:15.899 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.159 [2024-10-01 
13:45:26.262719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 57e0b987-e287-4471-b972-209c649e66b2 '!=' 57e0b987-e287-4471-b972-209c649e66b2 ']' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64965 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64965 ']' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64965 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64965 00:11:16.159 killing process with pid 64965 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64965' 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64965 00:11:16.159 [2024-10-01 13:45:26.333039] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.159 13:45:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 64965 00:11:16.159 [2024-10-01 13:45:26.333155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.159 [2024-10-01 13:45:26.333218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.159 [2024-10-01 13:45:26.333233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:16.726 [2024-10-01 13:45:26.639125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.753 13:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:17.753 00:11:17.753 real 0m5.295s 00:11:17.753 user 0m7.433s 00:11:17.753 sys 0m1.034s 00:11:17.753 13:45:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.753 ************************************ 00:11:17.753 END TEST raid_superblock_test 00:11:17.753 ************************************ 00:11:17.753 13:45:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.012 13:45:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:18.012 13:45:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:18.012 13:45:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.012 13:45:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.012 ************************************ 00:11:18.012 START TEST raid_read_error_test 00:11:18.012 ************************************ 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:18.012 13:45:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g6aIFB3PvV 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65218 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65218 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65218 ']' 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.012 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.012 [2024-10-01 13:45:28.131542] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:18.012 [2024-10-01 13:45:28.131699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65218 ] 00:11:18.271 [2024-10-01 13:45:28.305202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.530 [2024-10-01 13:45:28.525205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.790 [2024-10-01 13:45:28.726613] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.790 [2024-10-01 13:45:28.726679] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.790 13:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.048 BaseBdev1_malloc 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.048 true 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.048 [2024-10-01 13:45:29.034154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.048 [2024-10-01 13:45:29.034344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.048 [2024-10-01 13:45:29.034410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.048 [2024-10-01 13:45:29.034540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.048 [2024-10-01 13:45:29.037017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.048 [2024-10-01 13:45:29.037168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.048 BaseBdev1 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.048 BaseBdev2_malloc 00:11:19.048 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 true 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 [2024-10-01 13:45:29.112997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.049 [2024-10-01 13:45:29.113165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.049 [2024-10-01 13:45:29.113218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.049 [2024-10-01 13:45:29.113352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.049 [2024-10-01 13:45:29.115781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.049 [2024-10-01 13:45:29.115922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.049 BaseBdev2 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 BaseBdev3_malloc 00:11:19.049 13:45:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 true 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 [2024-10-01 13:45:29.190099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.049 [2024-10-01 13:45:29.190276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.049 [2024-10-01 13:45:29.190333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:19.049 [2024-10-01 13:45:29.190424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.049 [2024-10-01 13:45:29.193180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.049 [2024-10-01 13:45:29.193330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:19.049 BaseBdev3 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.049 [2024-10-01 13:45:29.202199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.049 [2024-10-01 13:45:29.204571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.049 [2024-10-01 13:45:29.204802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.049 [2024-10-01 13:45:29.205047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:19.049 [2024-10-01 13:45:29.205093] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.049 [2024-10-01 13:45:29.205550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:19.049 [2024-10-01 13:45:29.205762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:19.049 [2024-10-01 13:45:29.205778] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:19.049 [2024-10-01 13:45:29.205955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.049 13:45:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.049 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.307 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.307 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.307 "name": "raid_bdev1", 00:11:19.307 "uuid": "371555e5-a5ea-4f9a-b21e-2c5537b59726", 00:11:19.307 "strip_size_kb": 64, 00:11:19.308 "state": "online", 00:11:19.308 "raid_level": "raid0", 00:11:19.308 "superblock": true, 00:11:19.308 "num_base_bdevs": 3, 00:11:19.308 "num_base_bdevs_discovered": 3, 00:11:19.308 "num_base_bdevs_operational": 3, 00:11:19.308 "base_bdevs_list": [ 00:11:19.308 { 00:11:19.308 "name": "BaseBdev1", 00:11:19.308 "uuid": "75fbca1d-64a8-5cea-ae87-d0cf2c36ce82", 00:11:19.308 "is_configured": true, 00:11:19.308 "data_offset": 2048, 00:11:19.308 "data_size": 63488 00:11:19.308 }, 00:11:19.308 { 00:11:19.308 "name": "BaseBdev2", 00:11:19.308 "uuid": "c5f225f2-a91b-5fa5-9a02-49955eb0f143", 00:11:19.308 "is_configured": true, 00:11:19.308 "data_offset": 2048, 00:11:19.308 "data_size": 63488 
00:11:19.308 }, 00:11:19.308 { 00:11:19.308 "name": "BaseBdev3", 00:11:19.308 "uuid": "8120f59c-f3da-5339-ba7e-3452b1ca8c6d", 00:11:19.308 "is_configured": true, 00:11:19.308 "data_offset": 2048, 00:11:19.308 "data_size": 63488 00:11:19.308 } 00:11:19.308 ] 00:11:19.308 }' 00:11:19.308 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.308 13:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.565 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:19.565 13:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:19.565 [2024-10-01 13:45:29.706932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
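`verify_raid_bdev_state` above pipes `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the fields against the expected state. The same selection and checks in Python, using a trimmed copy of the JSON the log prints:

```python
import json

# Trimmed-down copy of the raid_bdev_info JSON dumped in the log.
output = json.loads("""[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid0",
   "strip_size_kb": 64, "superblock": true, "num_base_bdevs": 3,
   "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}
]""")

# Equivalent of jq's select(.name == "raid_bdev1")
info = next(b for b in output if b["name"] == "raid_bdev1")
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 3
print("raid_bdev1 state verified")
```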
00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.499 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.500 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.500 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.500 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.500 "name": "raid_bdev1", 00:11:20.500 "uuid": "371555e5-a5ea-4f9a-b21e-2c5537b59726", 00:11:20.500 "strip_size_kb": 64, 00:11:20.500 "state": "online", 00:11:20.500 "raid_level": "raid0", 00:11:20.500 "superblock": true, 00:11:20.500 "num_base_bdevs": 3, 00:11:20.500 "num_base_bdevs_discovered": 3, 00:11:20.500 "num_base_bdevs_operational": 3, 00:11:20.500 "base_bdevs_list": [ 00:11:20.500 { 00:11:20.500 "name": "BaseBdev1", 00:11:20.500 "uuid": "75fbca1d-64a8-5cea-ae87-d0cf2c36ce82", 00:11:20.500 "is_configured": true, 00:11:20.500 "data_offset": 2048, 00:11:20.500 "data_size": 63488 
00:11:20.500 }, 00:11:20.500 { 00:11:20.500 "name": "BaseBdev2", 00:11:20.500 "uuid": "c5f225f2-a91b-5fa5-9a02-49955eb0f143", 00:11:20.500 "is_configured": true, 00:11:20.500 "data_offset": 2048, 00:11:20.500 "data_size": 63488 00:11:20.500 }, 00:11:20.500 { 00:11:20.500 "name": "BaseBdev3", 00:11:20.500 "uuid": "8120f59c-f3da-5339-ba7e-3452b1ca8c6d", 00:11:20.500 "is_configured": true, 00:11:20.500 "data_offset": 2048, 00:11:20.500 "data_size": 63488 00:11:20.500 } 00:11:20.500 ] 00:11:20.500 }' 00:11:20.500 13:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.500 13:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 [2024-10-01 13:45:31.045646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.066 [2024-10-01 13:45:31.045802] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.066 [2024-10-01 13:45:31.048535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.066 [2024-10-01 13:45:31.048584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.066 [2024-10-01 13:45:31.048622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.066 [2024-10-01 13:45:31.048633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:21.066 { 00:11:21.066 "results": [ 00:11:21.066 { 00:11:21.066 "job": "raid_bdev1", 00:11:21.066 "core_mask": "0x1", 00:11:21.066 "workload": "randrw", 00:11:21.066 "percentage": 50, 
00:11:21.066 "status": "finished", 00:11:21.066 "queue_depth": 1, 00:11:21.066 "io_size": 131072, 00:11:21.066 "runtime": 1.338674, 00:11:21.066 "iops": 16318.38670206488, 00:11:21.066 "mibps": 2039.79833775811, 00:11:21.066 "io_failed": 1, 00:11:21.066 "io_timeout": 0, 00:11:21.066 "avg_latency_us": 84.77629319806003, 00:11:21.066 "min_latency_us": 20.356626506024096, 00:11:21.066 "max_latency_us": 1427.8425702811246 00:11:21.066 } 00:11:21.066 ], 00:11:21.066 "core_count": 1 00:11:21.066 } 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65218 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65218 ']' 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65218 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65218 00:11:21.066 killing process with pid 65218 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65218' 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65218 00:11:21.066 [2024-10-01 13:45:31.095507] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.066 13:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65218 00:11:21.326 [2024-10-01 
13:45:31.328608] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g6aIFB3PvV 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:22.700 ************************************ 00:11:22.700 END TEST raid_read_error_test 00:11:22.700 ************************************ 00:11:22.700 00:11:22.700 real 0m4.667s 00:11:22.700 user 0m5.454s 00:11:22.700 sys 0m0.606s 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.700 13:45:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.700 13:45:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:22.700 13:45:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:22.700 13:45:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.700 13:45:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.700 ************************************ 00:11:22.700 START TEST raid_write_error_test 00:11:22.700 ************************************ 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:11:22.700 13:45:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.700 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.701 13:45:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JNA2XOltf8 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65364 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65364 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65364 ']' 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
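Earlier in the log, the read-error test extracted `fail_per_s=0.75` from the bdevperf log with `grep`/`awk` (bdev_raid.sh@845) and asserted it differed from `0.00`. That figure is consistent with the results JSON printed above, which reports 1 failed I/O over a 1.338674 s runtime — a quick check of the arithmetic:

```python
# Values copied from the "results" block printed by bdevperf in the log.
io_failed = 1
runtime_s = 1.338674

# One injected read failure over the run, expressed per second.
fail_per_s = io_failed / runtime_s
print(round(fail_per_s, 2))  # -> 0.75, the value the @849 check compared against 0.00
```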
00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.701 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.701 [2024-10-01 13:45:32.839551] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:22.701 [2024-10-01 13:45:32.840086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:11:22.959 [2024-10-01 13:45:33.001179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.217 [2024-10-01 13:45:33.238253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.474 [2024-10-01 13:45:33.451060] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.475 [2024-10-01 13:45:33.451119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.475 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 BaseBdev1_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 true 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 [2024-10-01 13:45:33.725906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.734 [2024-10-01 13:45:33.726091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.734 [2024-10-01 13:45:33.726200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.734 [2024-10-01 13:45:33.726222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.734 [2024-10-01 13:45:33.728746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.734 [2024-10-01 13:45:33.728790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.734 BaseBdev1 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.734 BaseBdev2_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 true 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 [2024-10-01 13:45:33.803086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.734 [2024-10-01 13:45:33.803251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.734 [2024-10-01 13:45:33.803319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.734 [2024-10-01 13:45:33.803576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.734 [2024-10-01 13:45:33.806157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.734 [2024-10-01 13:45:33.806316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.734 BaseBdev2 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.734 13:45:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 BaseBdev3_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 true 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 [2024-10-01 13:45:33.867922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:23.734 [2024-10-01 13:45:33.867975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.734 [2024-10-01 13:45:33.867993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:23.734 [2024-10-01 13:45:33.868007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.734 [2024-10-01 13:45:33.870473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.734 [2024-10-01 13:45:33.870512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:23.734 BaseBdev3 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 [2024-10-01 13:45:33.879989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.734 [2024-10-01 13:45:33.882276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.735 [2024-10-01 13:45:33.882471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.735 [2024-10-01 13:45:33.882702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.735 [2024-10-01 13:45:33.882789] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:23.735 [2024-10-01 13:45:33.883187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:23.735 [2024-10-01 13:45:33.883492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.735 [2024-10-01 13:45:33.883548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:23.735 [2024-10-01 13:45:33.883818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.735 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.993 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.993 "name": "raid_bdev1", 00:11:23.993 "uuid": "4f349663-5e55-45bb-b323-4b696deea799", 00:11:23.993 "strip_size_kb": 64, 00:11:23.993 "state": "online", 00:11:23.993 "raid_level": "raid0", 00:11:23.993 "superblock": true, 00:11:23.993 "num_base_bdevs": 3, 00:11:23.993 "num_base_bdevs_discovered": 3, 00:11:23.993 "num_base_bdevs_operational": 3, 00:11:23.993 "base_bdevs_list": [ 00:11:23.993 { 00:11:23.993 "name": "BaseBdev1", 
00:11:23.993 "uuid": "cca5d16f-b8f7-5e33-8f94-3dbc81c29ae5", 00:11:23.993 "is_configured": true, 00:11:23.993 "data_offset": 2048, 00:11:23.993 "data_size": 63488 00:11:23.993 }, 00:11:23.993 { 00:11:23.993 "name": "BaseBdev2", 00:11:23.993 "uuid": "2e2cd636-750f-5497-acac-5f2c1f30f22e", 00:11:23.993 "is_configured": true, 00:11:23.993 "data_offset": 2048, 00:11:23.993 "data_size": 63488 00:11:23.993 }, 00:11:23.993 { 00:11:23.993 "name": "BaseBdev3", 00:11:23.993 "uuid": "95602ee6-f924-5e81-a6d0-c946b5820278", 00:11:23.993 "is_configured": true, 00:11:23.993 "data_offset": 2048, 00:11:23.993 "data_size": 63488 00:11:23.993 } 00:11:23.993 ] 00:11:23.993 }' 00:11:23.993 13:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.993 13:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.252 13:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.252 13:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.252 [2024-10-01 13:45:34.336823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.215 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.216 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.216 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.216 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.216 "name": "raid_bdev1", 00:11:25.216 "uuid": "4f349663-5e55-45bb-b323-4b696deea799", 00:11:25.216 "strip_size_kb": 64, 00:11:25.216 "state": "online", 00:11:25.216 
"raid_level": "raid0", 00:11:25.216 "superblock": true, 00:11:25.216 "num_base_bdevs": 3, 00:11:25.216 "num_base_bdevs_discovered": 3, 00:11:25.216 "num_base_bdevs_operational": 3, 00:11:25.216 "base_bdevs_list": [ 00:11:25.216 { 00:11:25.216 "name": "BaseBdev1", 00:11:25.216 "uuid": "cca5d16f-b8f7-5e33-8f94-3dbc81c29ae5", 00:11:25.216 "is_configured": true, 00:11:25.216 "data_offset": 2048, 00:11:25.216 "data_size": 63488 00:11:25.216 }, 00:11:25.216 { 00:11:25.216 "name": "BaseBdev2", 00:11:25.216 "uuid": "2e2cd636-750f-5497-acac-5f2c1f30f22e", 00:11:25.216 "is_configured": true, 00:11:25.216 "data_offset": 2048, 00:11:25.216 "data_size": 63488 00:11:25.216 }, 00:11:25.216 { 00:11:25.216 "name": "BaseBdev3", 00:11:25.216 "uuid": "95602ee6-f924-5e81-a6d0-c946b5820278", 00:11:25.216 "is_configured": true, 00:11:25.216 "data_offset": 2048, 00:11:25.216 "data_size": 63488 00:11:25.216 } 00:11:25.216 ] 00:11:25.216 }' 00:11:25.216 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.216 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.474 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.474 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.474 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.474 [2024-10-01 13:45:35.665317] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.733 [2024-10-01 13:45:35.665502] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.733 [2024-10-01 13:45:35.668193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.733 [2024-10-01 13:45:35.668237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.733 [2024-10-01 13:45:35.668285] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.733 [2024-10-01 13:45:35.668297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:25.733 { 00:11:25.733 "results": [ 00:11:25.733 { 00:11:25.733 "job": "raid_bdev1", 00:11:25.733 "core_mask": "0x1", 00:11:25.733 "workload": "randrw", 00:11:25.733 "percentage": 50, 00:11:25.733 "status": "finished", 00:11:25.733 "queue_depth": 1, 00:11:25.733 "io_size": 131072, 00:11:25.733 "runtime": 1.328507, 00:11:25.733 "iops": 16215.194951927238, 00:11:25.733 "mibps": 2026.8993689909048, 00:11:25.733 "io_failed": 1, 00:11:25.733 "io_timeout": 0, 00:11:25.733 "avg_latency_us": 85.35990352348446, 00:11:25.733 "min_latency_us": 18.094779116465862, 00:11:25.733 "max_latency_us": 1572.6008032128514 00:11:25.733 } 00:11:25.733 ], 00:11:25.733 "core_count": 1 00:11:25.733 } 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65364 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65364 ']' 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65364 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65364 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65364' 00:11:25.733 killing process with pid 65364 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65364 00:11:25.733 [2024-10-01 13:45:35.721899] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.733 13:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65364 00:11:25.991 [2024-10-01 13:45:35.954765] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JNA2XOltf8 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:27.368 ************************************ 00:11:27.368 END TEST raid_write_error_test 00:11:27.368 ************************************ 00:11:27.368 00:11:27.368 real 0m4.563s 00:11:27.368 user 0m5.247s 00:11:27.368 sys 0m0.586s 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.368 13:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.368 13:45:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:27.368 13:45:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:27.368 13:45:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:27.368 13:45:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.368 13:45:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.368 ************************************ 00:11:27.368 START TEST raid_state_function_test 00:11:27.368 ************************************ 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:27.368 13:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65502 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65502' 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:27.368 Process raid pid: 65502 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65502 00:11:27.368 13:45:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65502 ']' 00:11:27.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.368 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.368 [2024-10-01 13:45:37.480592] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:27.368 [2024-10-01 13:45:37.480735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.625 [2024-10-01 13:45:37.653459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.883 [2024-10-01 13:45:37.868681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.883 [2024-10-01 13:45:38.065710] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.883 [2024-10-01 13:45:38.065944] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.143 [2024-10-01 13:45:38.321648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.143 [2024-10-01 13:45:38.321847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.143 [2024-10-01 13:45:38.321954] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.143 [2024-10-01 13:45:38.322000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.143 [2024-10-01 13:45:38.322029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.143 [2024-10-01 13:45:38.322061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.143 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.402 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.402 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.402 "name": "Existed_Raid",
00:11:28.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.402 "strip_size_kb": 64,
00:11:28.402 "state": "configuring",
00:11:28.402 "raid_level": "concat",
00:11:28.402 "superblock": false,
00:11:28.402 "num_base_bdevs": 3,
00:11:28.402 "num_base_bdevs_discovered": 0,
00:11:28.402 "num_base_bdevs_operational": 3,
00:11:28.402 "base_bdevs_list": [
00:11:28.402 {
00:11:28.402 "name": "BaseBdev1",
00:11:28.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.402 "is_configured": false,
00:11:28.402 "data_offset": 0,
00:11:28.402 "data_size": 0
00:11:28.402 },
00:11:28.402 {
00:11:28.402 "name": "BaseBdev2",
00:11:28.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.402 "is_configured": false,
00:11:28.402 "data_offset": 0,
00:11:28.402 "data_size": 0
00:11:28.402 },
00:11:28.402 {
00:11:28.402 "name": "BaseBdev3",
00:11:28.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.402 "is_configured": false,
00:11:28.402 "data_offset": 0,
00:11:28.402 "data_size": 0
00:11:28.402 }
00:11:28.402 ]
00:11:28.402 }'
00:11:28.402 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.402 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 [2024-10-01 13:45:38.733136] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:28.661 [2024-10-01 13:45:38.733180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 [2024-10-01 13:45:38.745120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:28.661 [2024-10-01 13:45:38.745286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:28.661 [2024-10-01 13:45:38.745306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:28.661 [2024-10-01 13:45:38.745320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:28.661 [2024-10-01 13:45:38.745328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:28.661 [2024-10-01 13:45:38.745340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 [2024-10-01 13:45:38.799782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:28.661 BaseBdev1
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.661 [
00:11:28.661 {
00:11:28.661 "name": "BaseBdev1",
00:11:28.661 "aliases": [
00:11:28.661 "bec44fca-c273-48bd-90bf-1c4f0d4f8d51"
00:11:28.661 ],
00:11:28.661 "product_name": "Malloc disk",
00:11:28.661 "block_size": 512,
00:11:28.661 "num_blocks": 65536,
00:11:28.661 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:28.661 "assigned_rate_limits": {
00:11:28.661 "rw_ios_per_sec": 0,
00:11:28.661 "rw_mbytes_per_sec": 0,
00:11:28.661 "r_mbytes_per_sec": 0,
00:11:28.661 "w_mbytes_per_sec": 0
00:11:28.661 },
00:11:28.661 "claimed": true,
00:11:28.661 "claim_type": "exclusive_write",
00:11:28.661 "zoned": false,
00:11:28.661 "supported_io_types": {
00:11:28.661 "read": true,
00:11:28.661 "write": true,
00:11:28.661 "unmap": true,
00:11:28.661 "flush": true,
00:11:28.661 "reset": true,
00:11:28.661 "nvme_admin": false,
00:11:28.661 "nvme_io": false,
00:11:28.661 "nvme_io_md": false,
00:11:28.661 "write_zeroes": true,
00:11:28.661 "zcopy": true,
00:11:28.661 "get_zone_info": false,
00:11:28.661 "zone_management": false,
00:11:28.661 "zone_append": false,
00:11:28.661 "compare": false,
00:11:28.661 "compare_and_write": false,
00:11:28.661 "abort": true,
00:11:28.661 "seek_hole": false,
00:11:28.661 "seek_data": false,
00:11:28.661 "copy": true,
00:11:28.661 "nvme_iov_md": false
00:11:28.661 },
00:11:28.661 "memory_domains": [
00:11:28.661 {
00:11:28.661 "dma_device_id": "system",
00:11:28.661 "dma_device_type": 1
00:11:28.661 },
00:11:28.661 {
00:11:28.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:28.661 "dma_device_type": 2
00:11:28.661 }
00:11:28.661 ],
00:11:28.661 "driver_specific": {}
00:11:28.661 }
00:11:28.661 ]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.661 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.920 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.920 "name": "Existed_Raid",
00:11:28.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.920 "strip_size_kb": 64,
00:11:28.920 "state": "configuring",
00:11:28.920 "raid_level": "concat",
00:11:28.920 "superblock": false,
00:11:28.920 "num_base_bdevs": 3,
00:11:28.921 "num_base_bdevs_discovered": 1,
00:11:28.921 "num_base_bdevs_operational": 3,
00:11:28.921 "base_bdevs_list": [
00:11:28.921 {
00:11:28.921 "name": "BaseBdev1",
00:11:28.921 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:28.921 "is_configured": true,
00:11:28.921 "data_offset": 0,
00:11:28.921 "data_size": 65536
00:11:28.921 },
00:11:28.921 {
00:11:28.921 "name": "BaseBdev2",
00:11:28.921 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.921 "is_configured": false,
00:11:28.921 "data_offset": 0,
00:11:28.921 "data_size": 0
00:11:28.921 },
00:11:28.921 {
00:11:28.921 "name": "BaseBdev3",
00:11:28.921 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.921 "is_configured": false,
00:11:28.921 "data_offset": 0,
00:11:28.921 "data_size": 0
00:11:28.921 }
00:11:28.921 ]
00:11:28.921 }'
00:11:28.921 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.921 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.214 [2024-10-01 13:45:39.299420] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:29.214 [2024-10-01 13:45:39.299587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.214 [2024-10-01 13:45:39.311452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:29.214 [2024-10-01 13:45:39.313611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:29.214 [2024-10-01 13:45:39.313659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:29.214 [2024-10-01 13:45:39.313670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:29.214 [2024-10-01 13:45:39.313699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.214 "name": "Existed_Raid",
00:11:29.214 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.214 "strip_size_kb": 64,
00:11:29.214 "state": "configuring",
00:11:29.214 "raid_level": "concat",
00:11:29.214 "superblock": false,
00:11:29.214 "num_base_bdevs": 3,
00:11:29.214 "num_base_bdevs_discovered": 1,
00:11:29.214 "num_base_bdevs_operational": 3,
00:11:29.214 "base_bdevs_list": [
00:11:29.214 {
00:11:29.214 "name": "BaseBdev1",
00:11:29.214 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:29.214 "is_configured": true,
00:11:29.214 "data_offset": 0,
00:11:29.214 "data_size": 65536
00:11:29.214 },
00:11:29.214 {
00:11:29.214 "name": "BaseBdev2",
00:11:29.214 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.214 "is_configured": false,
00:11:29.214 "data_offset": 0,
00:11:29.214 "data_size": 0
00:11:29.214 },
00:11:29.214 {
00:11:29.214 "name": "BaseBdev3",
00:11:29.214 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.214 "is_configured": false,
00:11:29.214 "data_offset": 0,
00:11:29.214 "data_size": 0
00:11:29.214 }
00:11:29.214 ]
00:11:29.214 }'
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.214 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.783 [2024-10-01 13:45:39.796700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:29.783 BaseBdev2
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.783 [
00:11:29.783 {
00:11:29.783 "name": "BaseBdev2",
00:11:29.783 "aliases": [
00:11:29.783 "a6d3d4af-b3a5-4499-9765-a2ba06089380"
00:11:29.783 ],
00:11:29.783 "product_name": "Malloc disk",
00:11:29.783 "block_size": 512,
00:11:29.783 "num_blocks": 65536,
00:11:29.783 "uuid": "a6d3d4af-b3a5-4499-9765-a2ba06089380",
00:11:29.783 "assigned_rate_limits": {
00:11:29.783 "rw_ios_per_sec": 0,
00:11:29.783 "rw_mbytes_per_sec": 0,
00:11:29.783 "r_mbytes_per_sec": 0,
00:11:29.783 "w_mbytes_per_sec": 0
00:11:29.783 },
00:11:29.783 "claimed": true,
00:11:29.783 "claim_type": "exclusive_write",
00:11:29.783 "zoned": false,
00:11:29.783 "supported_io_types": {
00:11:29.783 "read": true,
00:11:29.783 "write": true,
00:11:29.783 "unmap": true,
00:11:29.783 "flush": true,
00:11:29.783 "reset": true,
00:11:29.783 "nvme_admin": false,
00:11:29.783 "nvme_io": false,
00:11:29.783 "nvme_io_md": false,
00:11:29.783 "write_zeroes": true,
00:11:29.783 "zcopy": true,
00:11:29.783 "get_zone_info": false,
00:11:29.783 "zone_management": false,
00:11:29.783 "zone_append": false,
00:11:29.783 "compare": false,
00:11:29.783 "compare_and_write": false,
00:11:29.783 "abort": true,
00:11:29.783 "seek_hole": false,
00:11:29.783 "seek_data": false,
00:11:29.783 "copy": true,
00:11:29.783 "nvme_iov_md": false
00:11:29.783 },
00:11:29.783 "memory_domains": [
00:11:29.783 {
00:11:29.783 "dma_device_id": "system",
00:11:29.783 "dma_device_type": 1
00:11:29.783 },
00:11:29.783 {
00:11:29.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:29.783 "dma_device_type": 2
00:11:29.783 }
00:11:29.783 ],
00:11:29.783 "driver_specific": {}
00:11:29.783 }
00:11:29.783 ]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.783 "name": "Existed_Raid",
00:11:29.783 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.783 "strip_size_kb": 64,
00:11:29.783 "state": "configuring",
00:11:29.783 "raid_level": "concat",
00:11:29.783 "superblock": false,
00:11:29.783 "num_base_bdevs": 3,
00:11:29.783 "num_base_bdevs_discovered": 2,
00:11:29.783 "num_base_bdevs_operational": 3,
00:11:29.783 "base_bdevs_list": [
00:11:29.783 {
00:11:29.783 "name": "BaseBdev1",
00:11:29.783 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:29.783 "is_configured": true,
00:11:29.783 "data_offset": 0,
00:11:29.783 "data_size": 65536
00:11:29.783 },
00:11:29.783 {
00:11:29.783 "name": "BaseBdev2",
00:11:29.783 "uuid": "a6d3d4af-b3a5-4499-9765-a2ba06089380",
00:11:29.783 "is_configured": true,
00:11:29.783 "data_offset": 0,
00:11:29.783 "data_size": 65536
00:11:29.783 },
00:11:29.783 {
00:11:29.783 "name": "BaseBdev3",
00:11:29.783 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.783 "is_configured": false,
00:11:29.783 "data_offset": 0,
00:11:29.783 "data_size": 0
00:11:29.783 }
00:11:29.783 ]
00:11:29.783 }'
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.783 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.351 [2024-10-01 13:45:40.309273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:30.351 BaseBdev3
00:11:30.351 [2024-10-01 13:45:40.309582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:30.351 [2024-10-01 13:45:40.309616] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:11:30.351 [2024-10-01 13:45:40.309931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:30.351 [2024-10-01 13:45:40.310133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:30.351 [2024-10-01 13:45:40.310147] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:30.351 [2024-10-01 13:45:40.310453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.351 [
00:11:30.351 {
00:11:30.351 "name": "BaseBdev3",
00:11:30.351 "aliases": [
00:11:30.351 "cf79c632-58f9-45c8-af25-02833fcb7599"
00:11:30.351 ],
00:11:30.351 "product_name": "Malloc disk",
00:11:30.351 "block_size": 512,
00:11:30.351 "num_blocks": 65536,
00:11:30.351 "uuid": "cf79c632-58f9-45c8-af25-02833fcb7599",
00:11:30.351 "assigned_rate_limits": {
00:11:30.351 "rw_ios_per_sec": 0,
00:11:30.351 "rw_mbytes_per_sec": 0,
00:11:30.351 "r_mbytes_per_sec": 0,
00:11:30.351 "w_mbytes_per_sec": 0
00:11:30.351 },
00:11:30.351 "claimed": true,
00:11:30.351 "claim_type": "exclusive_write",
00:11:30.351 "zoned": false,
00:11:30.351 "supported_io_types": {
00:11:30.351 "read": true,
00:11:30.351 "write": true,
00:11:30.351 "unmap": true,
00:11:30.351 "flush": true,
00:11:30.351 "reset": true,
00:11:30.351 "nvme_admin": false,
00:11:30.351 "nvme_io": false,
00:11:30.351 "nvme_io_md": false,
00:11:30.351 "write_zeroes": true,
00:11:30.351 "zcopy": true,
00:11:30.351 "get_zone_info": false,
00:11:30.351 "zone_management": false,
00:11:30.351 "zone_append": false,
00:11:30.351 "compare": false,
00:11:30.351 "compare_and_write": false,
00:11:30.351 "abort": true,
00:11:30.351 "seek_hole": false,
00:11:30.351 "seek_data": false,
00:11:30.351 "copy": true,
00:11:30.351 "nvme_iov_md": false
00:11:30.351 },
00:11:30.351 "memory_domains": [
00:11:30.351 {
00:11:30.351 "dma_device_id": "system",
00:11:30.351 "dma_device_type": 1
00:11:30.351 },
00:11:30.351 {
00:11:30.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.351 "dma_device_type": 2
00:11:30.351 }
00:11:30.351 ],
00:11:30.351 "driver_specific": {}
00:11:30.351 }
00:11:30.351 ]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.351 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.351 "name": "Existed_Raid",
00:11:30.351 "uuid": "fc39e921-b0b8-465b-b27e-a6d388ac7587",
00:11:30.351 "strip_size_kb": 64,
00:11:30.351 "state": "online",
00:11:30.351 "raid_level": "concat",
00:11:30.351 "superblock": false,
00:11:30.351 "num_base_bdevs": 3,
00:11:30.351 "num_base_bdevs_discovered": 3,
00:11:30.351 "num_base_bdevs_operational": 3,
00:11:30.351 "base_bdevs_list": [
00:11:30.351 {
00:11:30.351 "name": "BaseBdev1",
00:11:30.351 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:30.351 "is_configured": true,
00:11:30.351 "data_offset": 0,
00:11:30.351 "data_size": 65536
00:11:30.352 },
00:11:30.352 {
00:11:30.352 "name": "BaseBdev2",
00:11:30.352 "uuid": "a6d3d4af-b3a5-4499-9765-a2ba06089380",
00:11:30.352 "is_configured": true,
00:11:30.352 "data_offset": 0,
00:11:30.352 "data_size": 65536
00:11:30.352 },
00:11:30.352 {
00:11:30.352 "name": "BaseBdev3",
00:11:30.352 "uuid": "cf79c632-58f9-45c8-af25-02833fcb7599",
00:11:30.352 "is_configured": true,
00:11:30.352 "data_offset": 0,
00:11:30.352 "data_size": 65536
00:11:30.352 }
00:11:30.352 ]
00:11:30.352 }'
00:11:30.352 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:30.352 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.919 [2024-10-01 13:45:40.828942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:30.919 "name": "Existed_Raid",
00:11:30.919 "aliases": [
00:11:30.919 "fc39e921-b0b8-465b-b27e-a6d388ac7587"
00:11:30.919 ],
00:11:30.919 "product_name": "Raid Volume",
00:11:30.919 "block_size": 512,
00:11:30.919 "num_blocks": 196608,
00:11:30.919 "uuid": "fc39e921-b0b8-465b-b27e-a6d388ac7587",
00:11:30.919 "assigned_rate_limits": {
00:11:30.919 "rw_ios_per_sec": 0,
00:11:30.919 "rw_mbytes_per_sec": 0,
00:11:30.919 "r_mbytes_per_sec": 0,
00:11:30.919 "w_mbytes_per_sec": 0
00:11:30.919 },
00:11:30.919 "claimed": false,
00:11:30.919 "zoned": false,
00:11:30.919 "supported_io_types": {
00:11:30.919 "read": true,
00:11:30.919 "write": true,
00:11:30.919 "unmap": true,
00:11:30.919 "flush": true,
00:11:30.919 "reset": true,
00:11:30.919 "nvme_admin": false,
00:11:30.919 "nvme_io": false,
00:11:30.919 "nvme_io_md": false,
00:11:30.919 "write_zeroes": true,
00:11:30.919 "zcopy": false,
00:11:30.919 "get_zone_info": false,
00:11:30.919 "zone_management": false,
00:11:30.919 "zone_append": false,
00:11:30.919 "compare": false,
00:11:30.919 "compare_and_write": false,
00:11:30.919 "abort": false,
00:11:30.919 "seek_hole": false,
00:11:30.919 "seek_data": false,
00:11:30.919 "copy": false,
00:11:30.919 "nvme_iov_md": false
00:11:30.919 },
00:11:30.919 "memory_domains": [
00:11:30.919 {
00:11:30.919 "dma_device_id": "system",
00:11:30.919 "dma_device_type": 1
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.919 "dma_device_type": 2
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "dma_device_id": "system",
00:11:30.919 "dma_device_type": 1
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.919 "dma_device_type": 2
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "dma_device_id": "system",
00:11:30.919 "dma_device_type": 1
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.919 "dma_device_type": 2
00:11:30.919 }
00:11:30.919 ],
00:11:30.919 "driver_specific": {
00:11:30.919 "raid": {
00:11:30.919 "uuid": "fc39e921-b0b8-465b-b27e-a6d388ac7587",
00:11:30.919 "strip_size_kb": 64,
00:11:30.919 "state": "online",
00:11:30.919 "raid_level": "concat",
00:11:30.919 "superblock": false,
00:11:30.919 "num_base_bdevs": 3,
00:11:30.919 "num_base_bdevs_discovered": 3,
00:11:30.919 "num_base_bdevs_operational": 3,
00:11:30.919 "base_bdevs_list": [
00:11:30.919 {
00:11:30.919 "name": "BaseBdev1",
00:11:30.919 "uuid": "bec44fca-c273-48bd-90bf-1c4f0d4f8d51",
00:11:30.919 "is_configured": true,
00:11:30.919 "data_offset": 0,
00:11:30.919 "data_size": 65536
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "name": "BaseBdev2",
00:11:30.919 "uuid": "a6d3d4af-b3a5-4499-9765-a2ba06089380",
00:11:30.919 "is_configured": true,
00:11:30.919 "data_offset": 0,
00:11:30.919 "data_size": 65536
00:11:30.919 },
00:11:30.919 {
00:11:30.919 "name": "BaseBdev3",
00:11:30.919 "uuid": "cf79c632-58f9-45c8-af25-02833fcb7599",
00:11:30.919 "is_configured": true,
00:11:30.919 "data_offset": 0,
00:11:30.919 "data_size": 65536
00:11:30.919 }
00:11:30.919 ]
00:11:30.919 }
00:11:30.919 }
00:11:30.919 }'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:30.919 BaseBdev2
00:11:30.919 BaseBdev3'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:30.919 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:30.919 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.920 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.920 [2024-10-01 13:45:41.100281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.920 [2024-10-01 13:45:41.100430] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.920 [2024-10-01 13:45:41.100580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.178 "name": "Existed_Raid", 00:11:31.178 "uuid": "fc39e921-b0b8-465b-b27e-a6d388ac7587", 00:11:31.178 "strip_size_kb": 64, 00:11:31.178 "state": "offline", 00:11:31.178 "raid_level": "concat", 00:11:31.178 "superblock": false, 00:11:31.178 "num_base_bdevs": 3, 00:11:31.178 "num_base_bdevs_discovered": 2, 00:11:31.178 "num_base_bdevs_operational": 2, 00:11:31.178 "base_bdevs_list": [ 00:11:31.178 { 00:11:31.178 "name": null, 00:11:31.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.178 "is_configured": false, 00:11:31.178 "data_offset": 0, 00:11:31.178 "data_size": 65536 00:11:31.178 }, 00:11:31.178 { 00:11:31.178 "name": "BaseBdev2", 00:11:31.178 "uuid": 
"a6d3d4af-b3a5-4499-9765-a2ba06089380", 00:11:31.178 "is_configured": true, 00:11:31.178 "data_offset": 0, 00:11:31.178 "data_size": 65536 00:11:31.178 }, 00:11:31.178 { 00:11:31.178 "name": "BaseBdev3", 00:11:31.178 "uuid": "cf79c632-58f9-45c8-af25-02833fcb7599", 00:11:31.178 "is_configured": true, 00:11:31.178 "data_offset": 0, 00:11:31.178 "data_size": 65536 00:11:31.178 } 00:11:31.178 ] 00:11:31.178 }' 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.178 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.745 [2024-10-01 13:45:41.683474] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.745 [2024-10-01 13:45:41.830055] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.745 [2024-10-01 13:45:41.830236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.745 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.745 13:45:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.004 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.004 BaseBdev2 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.004 
13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.004 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.004 [ 00:11:32.004 { 00:11:32.004 "name": "BaseBdev2", 00:11:32.004 "aliases": [ 00:11:32.004 "a503361e-0045-4e3c-958e-2c1075bab4ea" 00:11:32.004 ], 00:11:32.004 "product_name": "Malloc disk", 00:11:32.004 "block_size": 512, 00:11:32.004 "num_blocks": 65536, 00:11:32.004 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:32.004 "assigned_rate_limits": { 00:11:32.004 "rw_ios_per_sec": 0, 00:11:32.004 "rw_mbytes_per_sec": 0, 00:11:32.004 "r_mbytes_per_sec": 0, 00:11:32.004 "w_mbytes_per_sec": 0 00:11:32.004 }, 00:11:32.004 "claimed": false, 00:11:32.004 "zoned": false, 00:11:32.004 "supported_io_types": { 00:11:32.004 "read": true, 00:11:32.004 "write": true, 00:11:32.004 "unmap": true, 00:11:32.004 "flush": true, 00:11:32.004 "reset": true, 00:11:32.004 "nvme_admin": false, 00:11:32.004 "nvme_io": false, 00:11:32.004 "nvme_io_md": false, 00:11:32.004 "write_zeroes": true, 
00:11:32.004 "zcopy": true, 00:11:32.004 "get_zone_info": false, 00:11:32.004 "zone_management": false, 00:11:32.004 "zone_append": false, 00:11:32.004 "compare": false, 00:11:32.004 "compare_and_write": false, 00:11:32.004 "abort": true, 00:11:32.004 "seek_hole": false, 00:11:32.004 "seek_data": false, 00:11:32.004 "copy": true, 00:11:32.004 "nvme_iov_md": false 00:11:32.004 }, 00:11:32.004 "memory_domains": [ 00:11:32.004 { 00:11:32.004 "dma_device_id": "system", 00:11:32.004 "dma_device_type": 1 00:11:32.005 }, 00:11:32.005 { 00:11:32.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.005 "dma_device_type": 2 00:11:32.005 } 00:11:32.005 ], 00:11:32.005 "driver_specific": {} 00:11:32.005 } 00:11:32.005 ] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 BaseBdev3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.005 13:45:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 [ 00:11:32.005 { 00:11:32.005 "name": "BaseBdev3", 00:11:32.005 "aliases": [ 00:11:32.005 "7ce8d09c-fc27-459a-9607-b223524f385a" 00:11:32.005 ], 00:11:32.005 "product_name": "Malloc disk", 00:11:32.005 "block_size": 512, 00:11:32.005 "num_blocks": 65536, 00:11:32.005 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:32.005 "assigned_rate_limits": { 00:11:32.005 "rw_ios_per_sec": 0, 00:11:32.005 "rw_mbytes_per_sec": 0, 00:11:32.005 "r_mbytes_per_sec": 0, 00:11:32.005 "w_mbytes_per_sec": 0 00:11:32.005 }, 00:11:32.005 "claimed": false, 00:11:32.005 "zoned": false, 00:11:32.005 "supported_io_types": { 00:11:32.005 "read": true, 00:11:32.005 "write": true, 00:11:32.005 "unmap": true, 00:11:32.005 "flush": true, 00:11:32.005 "reset": true, 00:11:32.005 "nvme_admin": false, 00:11:32.005 "nvme_io": false, 00:11:32.005 "nvme_io_md": false, 00:11:32.005 "write_zeroes": true, 
00:11:32.005 "zcopy": true, 00:11:32.005 "get_zone_info": false, 00:11:32.005 "zone_management": false, 00:11:32.005 "zone_append": false, 00:11:32.005 "compare": false, 00:11:32.005 "compare_and_write": false, 00:11:32.005 "abort": true, 00:11:32.005 "seek_hole": false, 00:11:32.005 "seek_data": false, 00:11:32.005 "copy": true, 00:11:32.005 "nvme_iov_md": false 00:11:32.005 }, 00:11:32.005 "memory_domains": [ 00:11:32.005 { 00:11:32.005 "dma_device_id": "system", 00:11:32.005 "dma_device_type": 1 00:11:32.005 }, 00:11:32.005 { 00:11:32.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.005 "dma_device_type": 2 00:11:32.005 } 00:11:32.005 ], 00:11:32.005 "driver_specific": {} 00:11:32.005 } 00:11:32.005 ] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 [2024-10-01 13:45:42.144267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.005 [2024-10-01 13:45:42.144460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.005 [2024-10-01 13:45:42.144555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.005 [2024-10-01 13:45:42.146577] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.264 13:45:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.264 "name": "Existed_Raid", 00:11:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.264 "strip_size_kb": 64, 00:11:32.264 "state": "configuring", 00:11:32.264 "raid_level": "concat", 00:11:32.264 "superblock": false, 00:11:32.264 "num_base_bdevs": 3, 00:11:32.264 "num_base_bdevs_discovered": 2, 00:11:32.264 "num_base_bdevs_operational": 3, 00:11:32.264 "base_bdevs_list": [ 00:11:32.264 { 00:11:32.264 "name": "BaseBdev1", 00:11:32.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.264 "is_configured": false, 00:11:32.264 "data_offset": 0, 00:11:32.264 "data_size": 0 00:11:32.264 }, 00:11:32.264 { 00:11:32.264 "name": "BaseBdev2", 00:11:32.264 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:32.264 "is_configured": true, 00:11:32.264 "data_offset": 0, 00:11:32.264 "data_size": 65536 00:11:32.264 }, 00:11:32.264 { 00:11:32.264 "name": "BaseBdev3", 00:11:32.264 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:32.264 "is_configured": true, 00:11:32.264 "data_offset": 0, 00:11:32.264 "data_size": 65536 00:11:32.264 } 00:11:32.264 ] 00:11:32.264 }' 00:11:32.264 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.264 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.573 [2024-10-01 13:45:42.575646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.573 "name": "Existed_Raid", 00:11:32.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.573 "strip_size_kb": 64, 00:11:32.573 "state": "configuring", 00:11:32.573 "raid_level": "concat", 00:11:32.573 "superblock": false, 
00:11:32.573 "num_base_bdevs": 3, 00:11:32.573 "num_base_bdevs_discovered": 1, 00:11:32.573 "num_base_bdevs_operational": 3, 00:11:32.573 "base_bdevs_list": [ 00:11:32.573 { 00:11:32.573 "name": "BaseBdev1", 00:11:32.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.573 "is_configured": false, 00:11:32.573 "data_offset": 0, 00:11:32.573 "data_size": 0 00:11:32.573 }, 00:11:32.573 { 00:11:32.573 "name": null, 00:11:32.573 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:32.573 "is_configured": false, 00:11:32.573 "data_offset": 0, 00:11:32.573 "data_size": 65536 00:11:32.573 }, 00:11:32.573 { 00:11:32.573 "name": "BaseBdev3", 00:11:32.573 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:32.573 "is_configured": true, 00:11:32.573 "data_offset": 0, 00:11:32.573 "data_size": 65536 00:11:32.573 } 00:11:32.573 ] 00:11:32.573 }' 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.573 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.141 
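After `bdev_raid_remove_base_bdev BaseBdev2`, the script confirms the vacated slot with `jq '.[0].base_bdevs_list[1].is_configured'` against `bdev_raid_get_bdevs` output and expects `false`. A self-contained sketch of that slot check, parsing a canned `base_bdevs_list` with awk instead of jq against a live target (the JSON here is abridged from the log, not a full RPC response):

```shell
#!/usr/bin/env bash
# Canned base_bdevs_list mirroring the log: slot 1 was just removed.
base_bdevs_json='[
  { "name": "BaseBdev1", "is_configured": false },
  { "name": null,        "is_configured": false },
  { "name": "BaseBdev3", "is_configured": true }
]'

# Extract the Nth is_configured value (0-based), standing in for
# jq ".[0].base_bdevs_list[$1].is_configured" in the real script.
nth_is_configured() {
    echo "$base_bdevs_json" | awk -v n="$1" \
        '/is_configured/ { i++; if (i == n + 1) { gsub(/[ ",}]/, ""); sub(/.*:/, ""); print } }'
}

val=$(nth_is_configured 1)
[[ $val == false ]] && echo "slot 1 unconfigured, as expected"
```

With the slot confirmed empty, the test proceeds (as the log shows next) to recreate `BaseBdev1` with `bdev_malloc_create` so the raid bdev can leave the `configuring` state.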
13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 [2024-10-01 13:45:43.105573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.141 BaseBdev1 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 [ 00:11:33.141 { 00:11:33.141 "name": "BaseBdev1", 00:11:33.141 "aliases": [ 00:11:33.141 "4b5e2d58-72bf-47db-998d-71be7bc62fac" 00:11:33.141 ], 00:11:33.141 "product_name": 
"Malloc disk", 00:11:33.141 "block_size": 512, 00:11:33.141 "num_blocks": 65536, 00:11:33.141 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:33.141 "assigned_rate_limits": { 00:11:33.141 "rw_ios_per_sec": 0, 00:11:33.141 "rw_mbytes_per_sec": 0, 00:11:33.141 "r_mbytes_per_sec": 0, 00:11:33.141 "w_mbytes_per_sec": 0 00:11:33.141 }, 00:11:33.141 "claimed": true, 00:11:33.141 "claim_type": "exclusive_write", 00:11:33.141 "zoned": false, 00:11:33.141 "supported_io_types": { 00:11:33.141 "read": true, 00:11:33.141 "write": true, 00:11:33.141 "unmap": true, 00:11:33.141 "flush": true, 00:11:33.141 "reset": true, 00:11:33.141 "nvme_admin": false, 00:11:33.141 "nvme_io": false, 00:11:33.141 "nvme_io_md": false, 00:11:33.141 "write_zeroes": true, 00:11:33.141 "zcopy": true, 00:11:33.141 "get_zone_info": false, 00:11:33.141 "zone_management": false, 00:11:33.141 "zone_append": false, 00:11:33.141 "compare": false, 00:11:33.141 "compare_and_write": false, 00:11:33.141 "abort": true, 00:11:33.141 "seek_hole": false, 00:11:33.141 "seek_data": false, 00:11:33.141 "copy": true, 00:11:33.141 "nvme_iov_md": false 00:11:33.141 }, 00:11:33.141 "memory_domains": [ 00:11:33.141 { 00:11:33.141 "dma_device_id": "system", 00:11:33.141 "dma_device_type": 1 00:11:33.141 }, 00:11:33.141 { 00:11:33.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.141 "dma_device_type": 2 00:11:33.141 } 00:11:33.141 ], 00:11:33.141 "driver_specific": {} 00:11:33.141 } 00:11:33.141 ] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.141 13:45:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.141 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.141 "name": "Existed_Raid", 00:11:33.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.141 "strip_size_kb": 64, 00:11:33.141 "state": "configuring", 00:11:33.141 "raid_level": "concat", 00:11:33.141 "superblock": false, 00:11:33.141 "num_base_bdevs": 3, 00:11:33.141 "num_base_bdevs_discovered": 2, 00:11:33.141 "num_base_bdevs_operational": 3, 00:11:33.141 "base_bdevs_list": [ 00:11:33.141 { 00:11:33.142 "name": "BaseBdev1", 
00:11:33.142 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:33.142 "is_configured": true, 00:11:33.142 "data_offset": 0, 00:11:33.142 "data_size": 65536 00:11:33.142 }, 00:11:33.142 { 00:11:33.142 "name": null, 00:11:33.142 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:33.142 "is_configured": false, 00:11:33.142 "data_offset": 0, 00:11:33.142 "data_size": 65536 00:11:33.142 }, 00:11:33.142 { 00:11:33.142 "name": "BaseBdev3", 00:11:33.142 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:33.142 "is_configured": true, 00:11:33.142 "data_offset": 0, 00:11:33.142 "data_size": 65536 00:11:33.142 } 00:11:33.142 ] 00:11:33.142 }' 00:11:33.142 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.142 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.400 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.400 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.400 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.400 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.400 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.660 [2024-10-01 13:45:43.601024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.660 
13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.660 "name": "Existed_Raid", 00:11:33.660 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:33.660 "strip_size_kb": 64, 00:11:33.660 "state": "configuring", 00:11:33.660 "raid_level": "concat", 00:11:33.660 "superblock": false, 00:11:33.660 "num_base_bdevs": 3, 00:11:33.660 "num_base_bdevs_discovered": 1, 00:11:33.660 "num_base_bdevs_operational": 3, 00:11:33.660 "base_bdevs_list": [ 00:11:33.660 { 00:11:33.660 "name": "BaseBdev1", 00:11:33.660 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:33.660 "is_configured": true, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 65536 00:11:33.660 }, 00:11:33.660 { 00:11:33.660 "name": null, 00:11:33.660 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:33.660 "is_configured": false, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 65536 00:11:33.660 }, 00:11:33.660 { 00:11:33.660 "name": null, 00:11:33.660 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:33.660 "is_configured": false, 00:11:33.660 "data_offset": 0, 00:11:33.660 "data_size": 65536 00:11:33.660 } 00:11:33.660 ] 00:11:33.660 }' 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.660 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.919 [2024-10-01 13:45:44.060352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.919 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.920 "name": "Existed_Raid", 00:11:33.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.920 "strip_size_kb": 64, 00:11:33.920 "state": "configuring", 00:11:33.920 "raid_level": "concat", 00:11:33.920 "superblock": false, 00:11:33.920 "num_base_bdevs": 3, 00:11:33.920 "num_base_bdevs_discovered": 2, 00:11:33.920 "num_base_bdevs_operational": 3, 00:11:33.920 "base_bdevs_list": [ 00:11:33.920 { 00:11:33.920 "name": "BaseBdev1", 00:11:33.920 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:33.920 "is_configured": true, 00:11:33.920 "data_offset": 0, 00:11:33.920 "data_size": 65536 00:11:33.920 }, 00:11:33.920 { 00:11:33.920 "name": null, 00:11:33.920 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:33.920 "is_configured": false, 00:11:33.920 "data_offset": 0, 00:11:33.920 "data_size": 65536 00:11:33.920 }, 00:11:33.920 { 00:11:33.920 "name": "BaseBdev3", 00:11:33.920 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:33.920 "is_configured": true, 00:11:33.920 "data_offset": 0, 00:11:33.920 "data_size": 65536 00:11:33.920 } 00:11:33.920 ] 00:11:33.920 }' 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.920 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.490 [2024-10-01 13:45:44.543705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.490 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.491 13:45:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.491 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.750 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.750 "name": "Existed_Raid", 00:11:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.750 "strip_size_kb": 64, 00:11:34.750 "state": "configuring", 00:11:34.750 "raid_level": "concat", 00:11:34.750 "superblock": false, 00:11:34.750 "num_base_bdevs": 3, 00:11:34.750 "num_base_bdevs_discovered": 1, 00:11:34.750 "num_base_bdevs_operational": 3, 00:11:34.750 "base_bdevs_list": [ 00:11:34.750 { 00:11:34.750 "name": null, 00:11:34.750 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:34.750 "is_configured": false, 00:11:34.750 "data_offset": 0, 00:11:34.750 "data_size": 65536 00:11:34.750 }, 00:11:34.750 { 00:11:34.750 "name": null, 00:11:34.750 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:34.750 "is_configured": false, 00:11:34.750 "data_offset": 0, 00:11:34.750 "data_size": 65536 00:11:34.750 }, 00:11:34.750 { 00:11:34.750 "name": "BaseBdev3", 00:11:34.750 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:34.750 "is_configured": true, 00:11:34.750 "data_offset": 0, 00:11:34.750 "data_size": 65536 00:11:34.750 } 00:11:34.750 ] 00:11:34.750 }' 00:11:34.750 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.750 13:45:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.009 [2024-10-01 13:45:45.147459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.009 13:45:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.009 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.269 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.269 "name": "Existed_Raid", 00:11:35.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.269 "strip_size_kb": 64, 00:11:35.269 "state": "configuring", 00:11:35.269 "raid_level": "concat", 00:11:35.269 "superblock": false, 00:11:35.269 "num_base_bdevs": 3, 00:11:35.269 "num_base_bdevs_discovered": 2, 00:11:35.269 "num_base_bdevs_operational": 3, 00:11:35.269 "base_bdevs_list": [ 00:11:35.269 { 00:11:35.269 "name": null, 00:11:35.269 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:35.269 "is_configured": false, 00:11:35.269 "data_offset": 0, 00:11:35.269 "data_size": 65536 00:11:35.269 }, 00:11:35.269 { 00:11:35.269 "name": "BaseBdev2", 00:11:35.269 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:35.269 "is_configured": true, 00:11:35.269 "data_offset": 
0, 00:11:35.269 "data_size": 65536 00:11:35.269 }, 00:11:35.269 { 00:11:35.269 "name": "BaseBdev3", 00:11:35.269 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:35.269 "is_configured": true, 00:11:35.269 "data_offset": 0, 00:11:35.269 "data_size": 65536 00:11:35.269 } 00:11:35.269 ] 00:11:35.269 }' 00:11:35.269 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.269 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b5e2d58-72bf-47db-998d-71be7bc62fac 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.528 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.787 [2024-10-01 13:45:45.753060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.788 NewBaseBdev 00:11:35.788 [2024-10-01 13:45:45.753271] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.788 [2024-10-01 13:45:45.753295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:35.788 [2024-10-01 13:45:45.753601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:35.788 [2024-10-01 13:45:45.753754] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.788 [2024-10-01 13:45:45.753764] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.788 [2024-10-01 13:45:45.754016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.788 
13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 [ 00:11:35.788 { 00:11:35.788 "name": "NewBaseBdev", 00:11:35.788 "aliases": [ 00:11:35.788 "4b5e2d58-72bf-47db-998d-71be7bc62fac" 00:11:35.788 ], 00:11:35.788 "product_name": "Malloc disk", 00:11:35.788 "block_size": 512, 00:11:35.788 "num_blocks": 65536, 00:11:35.788 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:35.788 "assigned_rate_limits": { 00:11:35.788 "rw_ios_per_sec": 0, 00:11:35.788 "rw_mbytes_per_sec": 0, 00:11:35.788 "r_mbytes_per_sec": 0, 00:11:35.788 "w_mbytes_per_sec": 0 00:11:35.788 }, 00:11:35.788 "claimed": true, 00:11:35.788 "claim_type": "exclusive_write", 00:11:35.788 "zoned": false, 00:11:35.788 "supported_io_types": { 00:11:35.788 "read": true, 00:11:35.788 "write": true, 00:11:35.788 "unmap": true, 00:11:35.788 "flush": true, 00:11:35.788 "reset": true, 00:11:35.788 "nvme_admin": false, 00:11:35.788 "nvme_io": false, 00:11:35.788 "nvme_io_md": false, 00:11:35.788 "write_zeroes": true, 00:11:35.788 "zcopy": true, 00:11:35.788 "get_zone_info": false, 00:11:35.788 "zone_management": false, 00:11:35.788 "zone_append": false, 00:11:35.788 "compare": false, 00:11:35.788 "compare_and_write": false, 00:11:35.788 "abort": true, 00:11:35.788 "seek_hole": false, 00:11:35.788 "seek_data": false, 00:11:35.788 "copy": true, 00:11:35.788 "nvme_iov_md": false 00:11:35.788 }, 00:11:35.788 
"memory_domains": [ 00:11:35.788 { 00:11:35.788 "dma_device_id": "system", 00:11:35.788 "dma_device_type": 1 00:11:35.788 }, 00:11:35.788 { 00:11:35.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.788 "dma_device_type": 2 00:11:35.788 } 00:11:35.788 ], 00:11:35.788 "driver_specific": {} 00:11:35.788 } 00:11:35.788 ] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.788 "name": "Existed_Raid", 00:11:35.788 "uuid": "0dbe4b70-6bfd-4cde-8b7d-4c2b5f670915", 00:11:35.788 "strip_size_kb": 64, 00:11:35.788 "state": "online", 00:11:35.788 "raid_level": "concat", 00:11:35.788 "superblock": false, 00:11:35.788 "num_base_bdevs": 3, 00:11:35.788 "num_base_bdevs_discovered": 3, 00:11:35.788 "num_base_bdevs_operational": 3, 00:11:35.788 "base_bdevs_list": [ 00:11:35.788 { 00:11:35.788 "name": "NewBaseBdev", 00:11:35.788 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:35.788 "is_configured": true, 00:11:35.788 "data_offset": 0, 00:11:35.788 "data_size": 65536 00:11:35.788 }, 00:11:35.788 { 00:11:35.788 "name": "BaseBdev2", 00:11:35.788 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:35.788 "is_configured": true, 00:11:35.788 "data_offset": 0, 00:11:35.788 "data_size": 65536 00:11:35.788 }, 00:11:35.788 { 00:11:35.788 "name": "BaseBdev3", 00:11:35.788 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:35.788 "is_configured": true, 00:11:35.788 "data_offset": 0, 00:11:35.788 "data_size": 65536 00:11:35.788 } 00:11:35.788 ] 00:11:35.788 }' 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.788 13:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.072 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.072 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.072 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:11:36.073 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.073 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.073 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.342 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.342 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.342 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.342 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.342 [2024-10-01 13:45:46.268731] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.342 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.343 "name": "Existed_Raid", 00:11:36.343 "aliases": [ 00:11:36.343 "0dbe4b70-6bfd-4cde-8b7d-4c2b5f670915" 00:11:36.343 ], 00:11:36.343 "product_name": "Raid Volume", 00:11:36.343 "block_size": 512, 00:11:36.343 "num_blocks": 196608, 00:11:36.343 "uuid": "0dbe4b70-6bfd-4cde-8b7d-4c2b5f670915", 00:11:36.343 "assigned_rate_limits": { 00:11:36.343 "rw_ios_per_sec": 0, 00:11:36.343 "rw_mbytes_per_sec": 0, 00:11:36.343 "r_mbytes_per_sec": 0, 00:11:36.343 "w_mbytes_per_sec": 0 00:11:36.343 }, 00:11:36.343 "claimed": false, 00:11:36.343 "zoned": false, 00:11:36.343 "supported_io_types": { 00:11:36.343 "read": true, 00:11:36.343 "write": true, 00:11:36.343 "unmap": true, 00:11:36.343 "flush": true, 00:11:36.343 "reset": true, 00:11:36.343 "nvme_admin": false, 00:11:36.343 "nvme_io": false, 00:11:36.343 "nvme_io_md": false, 00:11:36.343 "write_zeroes": true, 
00:11:36.343 "zcopy": false, 00:11:36.343 "get_zone_info": false, 00:11:36.343 "zone_management": false, 00:11:36.343 "zone_append": false, 00:11:36.343 "compare": false, 00:11:36.343 "compare_and_write": false, 00:11:36.343 "abort": false, 00:11:36.343 "seek_hole": false, 00:11:36.343 "seek_data": false, 00:11:36.343 "copy": false, 00:11:36.343 "nvme_iov_md": false 00:11:36.343 }, 00:11:36.343 "memory_domains": [ 00:11:36.343 { 00:11:36.343 "dma_device_id": "system", 00:11:36.343 "dma_device_type": 1 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.343 "dma_device_type": 2 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "dma_device_id": "system", 00:11:36.343 "dma_device_type": 1 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.343 "dma_device_type": 2 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "dma_device_id": "system", 00:11:36.343 "dma_device_type": 1 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.343 "dma_device_type": 2 00:11:36.343 } 00:11:36.343 ], 00:11:36.343 "driver_specific": { 00:11:36.343 "raid": { 00:11:36.343 "uuid": "0dbe4b70-6bfd-4cde-8b7d-4c2b5f670915", 00:11:36.343 "strip_size_kb": 64, 00:11:36.343 "state": "online", 00:11:36.343 "raid_level": "concat", 00:11:36.343 "superblock": false, 00:11:36.343 "num_base_bdevs": 3, 00:11:36.343 "num_base_bdevs_discovered": 3, 00:11:36.343 "num_base_bdevs_operational": 3, 00:11:36.343 "base_bdevs_list": [ 00:11:36.343 { 00:11:36.343 "name": "NewBaseBdev", 00:11:36.343 "uuid": "4b5e2d58-72bf-47db-998d-71be7bc62fac", 00:11:36.343 "is_configured": true, 00:11:36.343 "data_offset": 0, 00:11:36.343 "data_size": 65536 00:11:36.343 }, 00:11:36.343 { 00:11:36.343 "name": "BaseBdev2", 00:11:36.343 "uuid": "a503361e-0045-4e3c-958e-2c1075bab4ea", 00:11:36.343 "is_configured": true, 00:11:36.343 "data_offset": 0, 00:11:36.343 "data_size": 65536 00:11:36.343 }, 00:11:36.343 { 
00:11:36.343 "name": "BaseBdev3", 00:11:36.343 "uuid": "7ce8d09c-fc27-459a-9607-b223524f385a", 00:11:36.343 "is_configured": true, 00:11:36.343 "data_offset": 0, 00:11:36.343 "data_size": 65536 00:11:36.343 } 00:11:36.343 ] 00:11:36.343 } 00:11:36.343 } 00:11:36.343 }' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:36.343 BaseBdev2 00:11:36.343 BaseBdev3' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.343 13:45:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:36.343 [2024-10-01 13:45:46.532004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.343 [2024-10-01 13:45:46.532141] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.343 [2024-10-01 13:45:46.532340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.602 [2024-10-01 13:45:46.532527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.602 [2024-10-01 13:45:46.532639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65502 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65502 ']' 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65502 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65502 00:11:36.602 killing process with pid 65502 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65502' 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65502 00:11:36.602 [2024-10-01 13:45:46.582066] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.602 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65502 00:11:36.860 [2024-10-01 13:45:46.885047] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.236 13:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.236 00:11:38.236 real 0m10.793s 00:11:38.236 user 0m17.093s 00:11:38.236 sys 0m2.107s 00:11:38.236 13:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.236 13:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.236 ************************************ 00:11:38.236 END TEST raid_state_function_test 00:11:38.236 ************************************ 00:11:38.236 13:45:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:38.236 13:45:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:38.236 13:45:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.236 13:45:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.236 ************************************ 00:11:38.236 START TEST raid_state_function_test_sb 00:11:38.236 ************************************ 00:11:38.236 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:11:38.236 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:38.237 Process raid pid: 66130 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66130 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66130' 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66130 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66130 ']' 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.237 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.237 [2024-10-01 13:45:48.372667] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:38.237 [2024-10-01 13:45:48.372845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.496 [2024-10-01 13:45:48.549752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.755 [2024-10-01 13:45:48.775651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.013 [2024-10-01 13:45:48.985809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.013 [2024-10-01 13:45:48.985846] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.013 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.013 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:39.013 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:39.013 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.013 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.013 [2024-10-01 13:45:49.201464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.013 [2024-10-01 13:45:49.201656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.013 [2024-10-01 
13:45:49.201759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.013 [2024-10-01 13:45:49.201805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.013 [2024-10-01 13:45:49.201886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:39.013 [2024-10-01 13:45:49.201933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.272 "name": "Existed_Raid", 00:11:39.272 "uuid": "062c5e00-ffd8-415f-8e9d-dcb48d04cf65", 00:11:39.272 "strip_size_kb": 64, 00:11:39.272 "state": "configuring", 00:11:39.272 "raid_level": "concat", 00:11:39.272 "superblock": true, 00:11:39.272 "num_base_bdevs": 3, 00:11:39.272 "num_base_bdevs_discovered": 0, 00:11:39.272 "num_base_bdevs_operational": 3, 00:11:39.272 "base_bdevs_list": [ 00:11:39.272 { 00:11:39.272 "name": "BaseBdev1", 00:11:39.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.272 "is_configured": false, 00:11:39.272 "data_offset": 0, 00:11:39.272 "data_size": 0 00:11:39.272 }, 00:11:39.272 { 00:11:39.272 "name": "BaseBdev2", 00:11:39.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.272 "is_configured": false, 00:11:39.272 "data_offset": 0, 00:11:39.272 "data_size": 0 00:11:39.272 }, 00:11:39.272 { 00:11:39.272 "name": "BaseBdev3", 00:11:39.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.272 "is_configured": false, 00:11:39.272 "data_offset": 0, 00:11:39.272 "data_size": 0 00:11:39.272 } 00:11:39.272 ] 00:11:39.272 }' 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.272 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.531 [2024-10-01 13:45:49.664682] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:39.531 [2024-10-01 13:45:49.664847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.531 [2024-10-01 13:45:49.676694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.531 [2024-10-01 13:45:49.676872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.531 [2024-10-01 13:45:49.677027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.531 [2024-10-01 13:45:49.677074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.531 [2024-10-01 13:45:49.677145] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:39.531 [2024-10-01 13:45:49.677186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.531 
13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.531 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.791 [2024-10-01 13:45:49.740990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.791 BaseBdev1 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.791 [ 00:11:39.791 { 
00:11:39.791 "name": "BaseBdev1", 00:11:39.791 "aliases": [ 00:11:39.791 "a0a062b7-630c-447c-a910-4f6e8d79b6ad" 00:11:39.791 ], 00:11:39.791 "product_name": "Malloc disk", 00:11:39.791 "block_size": 512, 00:11:39.791 "num_blocks": 65536, 00:11:39.791 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:39.791 "assigned_rate_limits": { 00:11:39.791 "rw_ios_per_sec": 0, 00:11:39.791 "rw_mbytes_per_sec": 0, 00:11:39.791 "r_mbytes_per_sec": 0, 00:11:39.791 "w_mbytes_per_sec": 0 00:11:39.791 }, 00:11:39.791 "claimed": true, 00:11:39.791 "claim_type": "exclusive_write", 00:11:39.791 "zoned": false, 00:11:39.791 "supported_io_types": { 00:11:39.791 "read": true, 00:11:39.791 "write": true, 00:11:39.791 "unmap": true, 00:11:39.791 "flush": true, 00:11:39.791 "reset": true, 00:11:39.791 "nvme_admin": false, 00:11:39.791 "nvme_io": false, 00:11:39.791 "nvme_io_md": false, 00:11:39.791 "write_zeroes": true, 00:11:39.791 "zcopy": true, 00:11:39.791 "get_zone_info": false, 00:11:39.791 "zone_management": false, 00:11:39.791 "zone_append": false, 00:11:39.791 "compare": false, 00:11:39.791 "compare_and_write": false, 00:11:39.791 "abort": true, 00:11:39.791 "seek_hole": false, 00:11:39.791 "seek_data": false, 00:11:39.791 "copy": true, 00:11:39.791 "nvme_iov_md": false 00:11:39.791 }, 00:11:39.791 "memory_domains": [ 00:11:39.791 { 00:11:39.791 "dma_device_id": "system", 00:11:39.791 "dma_device_type": 1 00:11:39.791 }, 00:11:39.791 { 00:11:39.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.791 "dma_device_type": 2 00:11:39.791 } 00:11:39.791 ], 00:11:39.791 "driver_specific": {} 00:11:39.791 } 00:11:39.791 ] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.791 "name": "Existed_Raid", 00:11:39.791 "uuid": "493a8458-56ab-48f9-96e2-f6d5fb8fba77", 00:11:39.791 "strip_size_kb": 64, 00:11:39.791 "state": "configuring", 00:11:39.791 "raid_level": "concat", 00:11:39.791 "superblock": true, 00:11:39.791 
"num_base_bdevs": 3, 00:11:39.791 "num_base_bdevs_discovered": 1, 00:11:39.791 "num_base_bdevs_operational": 3, 00:11:39.791 "base_bdevs_list": [ 00:11:39.791 { 00:11:39.791 "name": "BaseBdev1", 00:11:39.791 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:39.791 "is_configured": true, 00:11:39.791 "data_offset": 2048, 00:11:39.791 "data_size": 63488 00:11:39.791 }, 00:11:39.791 { 00:11:39.791 "name": "BaseBdev2", 00:11:39.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.791 "is_configured": false, 00:11:39.791 "data_offset": 0, 00:11:39.791 "data_size": 0 00:11:39.791 }, 00:11:39.791 { 00:11:39.791 "name": "BaseBdev3", 00:11:39.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.791 "is_configured": false, 00:11:39.791 "data_offset": 0, 00:11:39.791 "data_size": 0 00:11:39.791 } 00:11:39.791 ] 00:11:39.791 }' 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.791 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 [2024-10-01 13:45:50.208425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.050 [2024-10-01 13:45:50.208483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:40.050 
13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 [2024-10-01 13:45:50.216484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.050 [2024-10-01 13:45:50.218723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.050 [2024-10-01 13:45:50.218783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.050 [2024-10-01 13:45:50.218795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.050 [2024-10-01 13:45:50.218823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.050 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.051 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.309 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.309 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.309 "name": "Existed_Raid", 00:11:40.309 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:40.309 "strip_size_kb": 64, 00:11:40.310 "state": "configuring", 00:11:40.310 "raid_level": "concat", 00:11:40.310 "superblock": true, 00:11:40.310 "num_base_bdevs": 3, 00:11:40.310 "num_base_bdevs_discovered": 1, 00:11:40.310 "num_base_bdevs_operational": 3, 00:11:40.310 "base_bdevs_list": [ 00:11:40.310 { 00:11:40.310 "name": "BaseBdev1", 00:11:40.310 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:40.310 "is_configured": true, 00:11:40.310 "data_offset": 2048, 00:11:40.310 "data_size": 63488 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "name": "BaseBdev2", 00:11:40.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.310 "is_configured": false, 00:11:40.310 "data_offset": 0, 00:11:40.310 "data_size": 0 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "name": "BaseBdev3", 00:11:40.310 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:40.310 "is_configured": false, 00:11:40.310 "data_offset": 0, 00:11:40.310 "data_size": 0 00:11:40.310 } 00:11:40.310 ] 00:11:40.310 }' 00:11:40.310 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.310 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 [2024-10-01 13:45:50.728064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.569 BaseBdev2 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.569 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.827 [ 00:11:40.827 { 00:11:40.827 "name": "BaseBdev2", 00:11:40.827 "aliases": [ 00:11:40.827 "c3f22e3c-e05f-4652-8187-4e85151f6554" 00:11:40.827 ], 00:11:40.828 "product_name": "Malloc disk", 00:11:40.828 "block_size": 512, 00:11:40.828 "num_blocks": 65536, 00:11:40.828 "uuid": "c3f22e3c-e05f-4652-8187-4e85151f6554", 00:11:40.828 "assigned_rate_limits": { 00:11:40.828 "rw_ios_per_sec": 0, 00:11:40.828 "rw_mbytes_per_sec": 0, 00:11:40.828 "r_mbytes_per_sec": 0, 00:11:40.828 "w_mbytes_per_sec": 0 00:11:40.828 }, 00:11:40.828 "claimed": true, 00:11:40.828 "claim_type": "exclusive_write", 00:11:40.828 "zoned": false, 00:11:40.828 "supported_io_types": { 00:11:40.828 "read": true, 00:11:40.828 "write": true, 00:11:40.828 "unmap": true, 00:11:40.828 "flush": true, 00:11:40.828 "reset": true, 00:11:40.828 "nvme_admin": false, 00:11:40.828 "nvme_io": false, 00:11:40.828 "nvme_io_md": false, 00:11:40.828 "write_zeroes": true, 00:11:40.828 "zcopy": true, 00:11:40.828 "get_zone_info": false, 00:11:40.828 "zone_management": false, 00:11:40.828 "zone_append": false, 00:11:40.828 "compare": false, 00:11:40.828 "compare_and_write": false, 00:11:40.828 "abort": true, 00:11:40.828 "seek_hole": false, 00:11:40.828 "seek_data": false, 00:11:40.828 "copy": true, 00:11:40.828 "nvme_iov_md": false 00:11:40.828 }, 00:11:40.828 "memory_domains": [ 00:11:40.828 { 00:11:40.828 "dma_device_id": "system", 00:11:40.828 "dma_device_type": 1 00:11:40.828 }, 00:11:40.828 { 00:11:40.828 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.828 "dma_device_type": 2 00:11:40.828 } 00:11:40.828 ], 00:11:40.828 "driver_specific": {} 00:11:40.828 } 00:11:40.828 ] 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.828 "name": "Existed_Raid", 00:11:40.828 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:40.828 "strip_size_kb": 64, 00:11:40.828 "state": "configuring", 00:11:40.828 "raid_level": "concat", 00:11:40.828 "superblock": true, 00:11:40.828 "num_base_bdevs": 3, 00:11:40.828 "num_base_bdevs_discovered": 2, 00:11:40.828 "num_base_bdevs_operational": 3, 00:11:40.828 "base_bdevs_list": [ 00:11:40.828 { 00:11:40.828 "name": "BaseBdev1", 00:11:40.828 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:40.828 "is_configured": true, 00:11:40.828 "data_offset": 2048, 00:11:40.828 "data_size": 63488 00:11:40.828 }, 00:11:40.828 { 00:11:40.828 "name": "BaseBdev2", 00:11:40.828 "uuid": "c3f22e3c-e05f-4652-8187-4e85151f6554", 00:11:40.828 "is_configured": true, 00:11:40.828 "data_offset": 2048, 00:11:40.828 "data_size": 63488 00:11:40.828 }, 00:11:40.828 { 00:11:40.828 "name": "BaseBdev3", 00:11:40.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.828 "is_configured": false, 00:11:40.828 "data_offset": 0, 00:11:40.828 "data_size": 0 00:11:40.828 } 00:11:40.828 ] 00:11:40.828 }' 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.828 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.087 13:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 [2024-10-01 13:45:51.246665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.087 [2024-10-01 13:45:51.247174] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.087 [2024-10-01 13:45:51.247204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:41.087 [2024-10-01 13:45:51.247543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:41.087 BaseBdev3 00:11:41.087 [2024-10-01 13:45:51.247698] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.087 [2024-10-01 13:45:51.247710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:41.087 [2024-10-01 13:45:51.247983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.087 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.087 [ 00:11:41.087 { 00:11:41.087 "name": "BaseBdev3", 00:11:41.087 "aliases": [ 00:11:41.087 "9be27ffe-f494-4507-88e9-9d1a3b87da39" 00:11:41.087 ], 00:11:41.087 "product_name": "Malloc disk", 00:11:41.087 "block_size": 512, 00:11:41.346 "num_blocks": 65536, 00:11:41.346 "uuid": "9be27ffe-f494-4507-88e9-9d1a3b87da39", 00:11:41.346 "assigned_rate_limits": { 00:11:41.346 "rw_ios_per_sec": 0, 00:11:41.346 "rw_mbytes_per_sec": 0, 00:11:41.346 "r_mbytes_per_sec": 0, 00:11:41.346 "w_mbytes_per_sec": 0 00:11:41.346 }, 00:11:41.346 "claimed": true, 00:11:41.346 "claim_type": "exclusive_write", 00:11:41.346 "zoned": false, 00:11:41.346 "supported_io_types": { 00:11:41.346 "read": true, 00:11:41.346 "write": true, 00:11:41.346 "unmap": true, 00:11:41.346 "flush": true, 00:11:41.346 "reset": true, 00:11:41.346 "nvme_admin": false, 00:11:41.346 "nvme_io": false, 00:11:41.346 "nvme_io_md": false, 00:11:41.346 "write_zeroes": true, 00:11:41.346 "zcopy": true, 00:11:41.346 "get_zone_info": false, 00:11:41.346 "zone_management": false, 00:11:41.346 "zone_append": false, 00:11:41.346 "compare": false, 00:11:41.346 "compare_and_write": false, 00:11:41.346 "abort": true, 00:11:41.346 "seek_hole": false, 00:11:41.346 "seek_data": false, 
00:11:41.346 "copy": true, 00:11:41.346 "nvme_iov_md": false 00:11:41.346 }, 00:11:41.346 "memory_domains": [ 00:11:41.346 { 00:11:41.346 "dma_device_id": "system", 00:11:41.346 "dma_device_type": 1 00:11:41.346 }, 00:11:41.346 { 00:11:41.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.346 "dma_device_type": 2 00:11:41.346 } 00:11:41.346 ], 00:11:41.346 "driver_specific": {} 00:11:41.346 } 00:11:41.346 ] 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.346 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.346 "name": "Existed_Raid", 00:11:41.346 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:41.346 "strip_size_kb": 64, 00:11:41.346 "state": "online", 00:11:41.346 "raid_level": "concat", 00:11:41.346 "superblock": true, 00:11:41.346 "num_base_bdevs": 3, 00:11:41.346 "num_base_bdevs_discovered": 3, 00:11:41.346 "num_base_bdevs_operational": 3, 00:11:41.346 "base_bdevs_list": [ 00:11:41.346 { 00:11:41.346 "name": "BaseBdev1", 00:11:41.346 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:41.346 "is_configured": true, 00:11:41.346 "data_offset": 2048, 00:11:41.346 "data_size": 63488 00:11:41.346 }, 00:11:41.346 { 00:11:41.346 "name": "BaseBdev2", 00:11:41.347 "uuid": "c3f22e3c-e05f-4652-8187-4e85151f6554", 00:11:41.347 "is_configured": true, 00:11:41.347 "data_offset": 2048, 00:11:41.347 "data_size": 63488 00:11:41.347 }, 00:11:41.347 { 00:11:41.347 "name": "BaseBdev3", 00:11:41.347 "uuid": "9be27ffe-f494-4507-88e9-9d1a3b87da39", 00:11:41.347 "is_configured": true, 00:11:41.347 "data_offset": 2048, 00:11:41.347 "data_size": 63488 00:11:41.347 } 00:11:41.347 ] 00:11:41.347 }' 00:11:41.347 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.347 13:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.607 [2024-10-01 13:45:51.754688] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.607 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.889 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.890 "name": "Existed_Raid", 00:11:41.890 "aliases": [ 00:11:41.890 "da41aa2b-0904-4106-976f-e39da50b4a23" 00:11:41.890 ], 00:11:41.890 "product_name": "Raid Volume", 00:11:41.890 "block_size": 512, 00:11:41.890 "num_blocks": 190464, 00:11:41.890 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:41.890 "assigned_rate_limits": { 00:11:41.890 "rw_ios_per_sec": 0, 00:11:41.890 "rw_mbytes_per_sec": 0, 00:11:41.890 
"r_mbytes_per_sec": 0, 00:11:41.890 "w_mbytes_per_sec": 0 00:11:41.890 }, 00:11:41.890 "claimed": false, 00:11:41.890 "zoned": false, 00:11:41.890 "supported_io_types": { 00:11:41.890 "read": true, 00:11:41.890 "write": true, 00:11:41.890 "unmap": true, 00:11:41.890 "flush": true, 00:11:41.890 "reset": true, 00:11:41.890 "nvme_admin": false, 00:11:41.890 "nvme_io": false, 00:11:41.890 "nvme_io_md": false, 00:11:41.890 "write_zeroes": true, 00:11:41.890 "zcopy": false, 00:11:41.890 "get_zone_info": false, 00:11:41.890 "zone_management": false, 00:11:41.890 "zone_append": false, 00:11:41.890 "compare": false, 00:11:41.890 "compare_and_write": false, 00:11:41.890 "abort": false, 00:11:41.890 "seek_hole": false, 00:11:41.890 "seek_data": false, 00:11:41.890 "copy": false, 00:11:41.890 "nvme_iov_md": false 00:11:41.890 }, 00:11:41.890 "memory_domains": [ 00:11:41.890 { 00:11:41.890 "dma_device_id": "system", 00:11:41.890 "dma_device_type": 1 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.890 "dma_device_type": 2 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "dma_device_id": "system", 00:11:41.890 "dma_device_type": 1 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.890 "dma_device_type": 2 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "dma_device_id": "system", 00:11:41.890 "dma_device_type": 1 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.890 "dma_device_type": 2 00:11:41.890 } 00:11:41.890 ], 00:11:41.890 "driver_specific": { 00:11:41.890 "raid": { 00:11:41.890 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:41.890 "strip_size_kb": 64, 00:11:41.890 "state": "online", 00:11:41.890 "raid_level": "concat", 00:11:41.890 "superblock": true, 00:11:41.890 "num_base_bdevs": 3, 00:11:41.890 "num_base_bdevs_discovered": 3, 00:11:41.890 "num_base_bdevs_operational": 3, 00:11:41.890 "base_bdevs_list": [ 00:11:41.890 { 00:11:41.890 
"name": "BaseBdev1", 00:11:41.890 "uuid": "a0a062b7-630c-447c-a910-4f6e8d79b6ad", 00:11:41.890 "is_configured": true, 00:11:41.890 "data_offset": 2048, 00:11:41.890 "data_size": 63488 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "name": "BaseBdev2", 00:11:41.890 "uuid": "c3f22e3c-e05f-4652-8187-4e85151f6554", 00:11:41.890 "is_configured": true, 00:11:41.890 "data_offset": 2048, 00:11:41.890 "data_size": 63488 00:11:41.890 }, 00:11:41.890 { 00:11:41.890 "name": "BaseBdev3", 00:11:41.890 "uuid": "9be27ffe-f494-4507-88e9-9d1a3b87da39", 00:11:41.890 "is_configured": true, 00:11:41.890 "data_offset": 2048, 00:11:41.890 "data_size": 63488 00:11:41.890 } 00:11:41.890 ] 00:11:41.890 } 00:11:41.890 } 00:11:41.890 }' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:41.890 BaseBdev2 00:11:41.890 BaseBdev3' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.890 13:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.890 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.890 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.890 [2024-10-01 13:45:52.049996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.890 [2024-10-01 13:45:52.050138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.890 [2024-10-01 13:45:52.050291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.150 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.151 "name": "Existed_Raid", 00:11:42.151 "uuid": "da41aa2b-0904-4106-976f-e39da50b4a23", 00:11:42.151 "strip_size_kb": 64, 00:11:42.151 "state": "offline", 00:11:42.151 "raid_level": "concat", 00:11:42.151 "superblock": true, 00:11:42.151 "num_base_bdevs": 3, 00:11:42.151 "num_base_bdevs_discovered": 2, 00:11:42.151 "num_base_bdevs_operational": 2, 00:11:42.151 "base_bdevs_list": [ 00:11:42.151 { 00:11:42.151 "name": null, 00:11:42.151 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:42.151 "is_configured": false, 00:11:42.151 "data_offset": 0, 00:11:42.151 "data_size": 63488 00:11:42.151 }, 00:11:42.151 { 00:11:42.151 "name": "BaseBdev2", 00:11:42.151 "uuid": "c3f22e3c-e05f-4652-8187-4e85151f6554", 00:11:42.151 "is_configured": true, 00:11:42.151 "data_offset": 2048, 00:11:42.151 "data_size": 63488 00:11:42.151 }, 00:11:42.151 { 00:11:42.151 "name": "BaseBdev3", 00:11:42.151 "uuid": "9be27ffe-f494-4507-88e9-9d1a3b87da39", 00:11:42.151 "is_configured": true, 00:11:42.151 "data_offset": 2048, 00:11:42.151 "data_size": 63488 00:11:42.151 } 00:11:42.151 ] 00:11:42.151 }' 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.151 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.411 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.411 [2024-10-01 13:45:52.594043] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 [2024-10-01 13:45:52.745304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.670 [2024-10-01 13:45:52.745528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.670 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 BaseBdev2 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.930 
13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.930 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 [ 00:11:42.930 { 00:11:42.930 "name": "BaseBdev2", 00:11:42.930 "aliases": [ 00:11:42.930 "2cb9bf2f-8804-4167-978d-74719dc9e4fa" 00:11:42.930 ], 00:11:42.930 "product_name": "Malloc disk", 00:11:42.930 "block_size": 512, 00:11:42.930 "num_blocks": 65536, 00:11:42.930 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:42.930 "assigned_rate_limits": { 00:11:42.930 "rw_ios_per_sec": 0, 00:11:42.930 "rw_mbytes_per_sec": 0, 00:11:42.930 "r_mbytes_per_sec": 0, 00:11:42.930 "w_mbytes_per_sec": 0 
00:11:42.930 }, 00:11:42.930 "claimed": false, 00:11:42.930 "zoned": false, 00:11:42.930 "supported_io_types": { 00:11:42.930 "read": true, 00:11:42.930 "write": true, 00:11:42.930 "unmap": true, 00:11:42.930 "flush": true, 00:11:42.930 "reset": true, 00:11:42.930 "nvme_admin": false, 00:11:42.930 "nvme_io": false, 00:11:42.930 "nvme_io_md": false, 00:11:42.930 "write_zeroes": true, 00:11:42.930 "zcopy": true, 00:11:42.930 "get_zone_info": false, 00:11:42.930 "zone_management": false, 00:11:42.930 "zone_append": false, 00:11:42.930 "compare": false, 00:11:42.930 "compare_and_write": false, 00:11:42.930 "abort": true, 00:11:42.930 "seek_hole": false, 00:11:42.930 "seek_data": false, 00:11:42.930 "copy": true, 00:11:42.930 "nvme_iov_md": false 00:11:42.930 }, 00:11:42.930 "memory_domains": [ 00:11:42.931 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 } 00:11:42.931 ], 00:11:42.931 "driver_specific": {} 00:11:42.931 } 00:11:42.931 ] 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.931 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.931 BaseBdev3 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.931 [ 00:11:42.931 { 00:11:42.931 "name": "BaseBdev3", 00:11:42.931 "aliases": [ 00:11:42.931 "10334e27-f72a-42aa-89ca-6a817df498aa" 00:11:42.931 ], 00:11:42.931 "product_name": "Malloc disk", 00:11:42.931 "block_size": 512, 00:11:42.931 "num_blocks": 65536, 00:11:42.931 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:42.931 "assigned_rate_limits": { 00:11:42.931 "rw_ios_per_sec": 0, 00:11:42.931 "rw_mbytes_per_sec": 0, 
00:11:42.931 "r_mbytes_per_sec": 0, 00:11:42.931 "w_mbytes_per_sec": 0 00:11:42.931 }, 00:11:42.931 "claimed": false, 00:11:42.931 "zoned": false, 00:11:42.931 "supported_io_types": { 00:11:42.931 "read": true, 00:11:42.931 "write": true, 00:11:42.931 "unmap": true, 00:11:42.931 "flush": true, 00:11:42.931 "reset": true, 00:11:42.931 "nvme_admin": false, 00:11:42.931 "nvme_io": false, 00:11:42.931 "nvme_io_md": false, 00:11:42.931 "write_zeroes": true, 00:11:42.931 "zcopy": true, 00:11:42.931 "get_zone_info": false, 00:11:42.931 "zone_management": false, 00:11:42.931 "zone_append": false, 00:11:42.931 "compare": false, 00:11:42.931 "compare_and_write": false, 00:11:42.931 "abort": true, 00:11:42.931 "seek_hole": false, 00:11:42.931 "seek_data": false, 00:11:42.931 "copy": true, 00:11:42.931 "nvme_iov_md": false 00:11:42.931 }, 00:11:42.931 "memory_domains": [ 00:11:42.931 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 } 00:11:42.931 ], 00:11:42.931 "driver_specific": {} 00:11:42.931 } 00:11:42.931 ] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.931 [2024-10-01 13:45:53.074905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.931 [2024-10-01 13:45:53.075078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.931 [2024-10-01 13:45:53.075328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.931 [2024-10-01 13:45:53.077628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.931 13:45:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.931 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.190 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.190 "name": "Existed_Raid", 00:11:43.190 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:43.190 "strip_size_kb": 64, 00:11:43.190 "state": "configuring", 00:11:43.190 "raid_level": "concat", 00:11:43.190 "superblock": true, 00:11:43.190 "num_base_bdevs": 3, 00:11:43.190 "num_base_bdevs_discovered": 2, 00:11:43.190 "num_base_bdevs_operational": 3, 00:11:43.190 "base_bdevs_list": [ 00:11:43.190 { 00:11:43.190 "name": "BaseBdev1", 00:11:43.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.190 "is_configured": false, 00:11:43.190 "data_offset": 0, 00:11:43.190 "data_size": 0 00:11:43.190 }, 00:11:43.190 { 00:11:43.190 "name": "BaseBdev2", 00:11:43.190 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:43.190 "is_configured": true, 00:11:43.190 "data_offset": 2048, 00:11:43.190 "data_size": 63488 00:11:43.190 }, 00:11:43.190 { 00:11:43.190 "name": "BaseBdev3", 00:11:43.190 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:43.190 "is_configured": true, 00:11:43.190 "data_offset": 2048, 00:11:43.190 "data_size": 63488 00:11:43.190 } 00:11:43.190 ] 00:11:43.190 }' 00:11:43.190 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.190 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.450 [2024-10-01 13:45:53.530285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.450 "name": "Existed_Raid", 00:11:43.450 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:43.450 "strip_size_kb": 64, 00:11:43.450 "state": "configuring", 00:11:43.450 "raid_level": "concat", 00:11:43.450 "superblock": true, 00:11:43.450 "num_base_bdevs": 3, 00:11:43.450 "num_base_bdevs_discovered": 1, 00:11:43.450 "num_base_bdevs_operational": 3, 00:11:43.450 "base_bdevs_list": [ 00:11:43.450 { 00:11:43.450 "name": "BaseBdev1", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 0 00:11:43.450 }, 00:11:43.450 { 00:11:43.450 "name": null, 00:11:43.450 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 63488 00:11:43.450 }, 00:11:43.450 { 00:11:43.450 "name": "BaseBdev3", 00:11:43.450 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:43.450 "is_configured": true, 00:11:43.450 "data_offset": 2048, 00:11:43.450 "data_size": 63488 00:11:43.450 } 00:11:43.450 ] 00:11:43.450 }' 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.450 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.018 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.018 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:44.018 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 [2024-10-01 13:45:54.052044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.018 BaseBdev1 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.018 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 13:45:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 [ 00:11:44.019 { 00:11:44.019 "name": "BaseBdev1", 00:11:44.019 "aliases": [ 00:11:44.019 "f681cf99-1f3a-4f6a-a0b3-73ab899f1138" 00:11:44.019 ], 00:11:44.019 "product_name": "Malloc disk", 00:11:44.019 "block_size": 512, 00:11:44.019 "num_blocks": 65536, 00:11:44.019 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:44.019 "assigned_rate_limits": { 00:11:44.019 "rw_ios_per_sec": 0, 00:11:44.019 "rw_mbytes_per_sec": 0, 00:11:44.019 "r_mbytes_per_sec": 0, 00:11:44.019 "w_mbytes_per_sec": 0 00:11:44.019 }, 00:11:44.019 "claimed": true, 00:11:44.019 "claim_type": "exclusive_write", 00:11:44.019 "zoned": false, 00:11:44.019 "supported_io_types": { 00:11:44.019 "read": true, 00:11:44.019 "write": true, 00:11:44.019 "unmap": true, 00:11:44.019 "flush": true, 00:11:44.019 "reset": true, 00:11:44.019 "nvme_admin": false, 00:11:44.019 "nvme_io": false, 00:11:44.019 "nvme_io_md": false, 00:11:44.019 "write_zeroes": true, 00:11:44.019 "zcopy": true, 00:11:44.019 "get_zone_info": false, 00:11:44.019 "zone_management": false, 00:11:44.019 "zone_append": false, 00:11:44.019 "compare": false, 00:11:44.019 "compare_and_write": false, 00:11:44.019 "abort": true, 00:11:44.019 "seek_hole": false, 00:11:44.019 "seek_data": false, 00:11:44.019 "copy": true, 00:11:44.019 "nvme_iov_md": false 00:11:44.019 }, 00:11:44.019 "memory_domains": [ 00:11:44.019 { 00:11:44.019 "dma_device_id": "system", 00:11:44.019 "dma_device_type": 1 00:11:44.019 }, 00:11:44.019 { 00:11:44.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.019 
"dma_device_type": 2 00:11:44.019 } 00:11:44.019 ], 00:11:44.019 "driver_specific": {} 00:11:44.019 } 00:11:44.019 ] 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.019 "name": "Existed_Raid", 00:11:44.019 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:44.019 "strip_size_kb": 64, 00:11:44.019 "state": "configuring", 00:11:44.019 "raid_level": "concat", 00:11:44.019 "superblock": true, 00:11:44.019 "num_base_bdevs": 3, 00:11:44.019 "num_base_bdevs_discovered": 2, 00:11:44.019 "num_base_bdevs_operational": 3, 00:11:44.019 "base_bdevs_list": [ 00:11:44.019 { 00:11:44.019 "name": "BaseBdev1", 00:11:44.019 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:44.019 "is_configured": true, 00:11:44.019 "data_offset": 2048, 00:11:44.019 "data_size": 63488 00:11:44.019 }, 00:11:44.019 { 00:11:44.019 "name": null, 00:11:44.019 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:44.019 "is_configured": false, 00:11:44.019 "data_offset": 0, 00:11:44.019 "data_size": 63488 00:11:44.019 }, 00:11:44.019 { 00:11:44.019 "name": "BaseBdev3", 00:11:44.019 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:44.019 "is_configured": true, 00:11:44.019 "data_offset": 2048, 00:11:44.019 "data_size": 63488 00:11:44.019 } 00:11:44.019 ] 00:11:44.019 }' 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.019 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 [2024-10-01 13:45:54.535574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.588 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.588 "name": "Existed_Raid", 00:11:44.588 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:44.588 "strip_size_kb": 64, 00:11:44.588 "state": "configuring", 00:11:44.588 "raid_level": "concat", 00:11:44.588 "superblock": true, 00:11:44.588 "num_base_bdevs": 3, 00:11:44.588 "num_base_bdevs_discovered": 1, 00:11:44.588 "num_base_bdevs_operational": 3, 00:11:44.588 "base_bdevs_list": [ 00:11:44.588 { 00:11:44.588 "name": "BaseBdev1", 00:11:44.588 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:44.588 "is_configured": true, 00:11:44.588 "data_offset": 2048, 00:11:44.588 "data_size": 63488 00:11:44.588 }, 00:11:44.588 { 00:11:44.588 "name": null, 00:11:44.588 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:44.589 "is_configured": false, 00:11:44.589 "data_offset": 0, 00:11:44.589 "data_size": 63488 00:11:44.589 }, 00:11:44.589 { 00:11:44.589 "name": null, 00:11:44.589 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:44.589 "is_configured": false, 00:11:44.589 "data_offset": 0, 00:11:44.589 "data_size": 63488 00:11:44.589 } 00:11:44.589 ] 00:11:44.589 }' 00:11:44.589 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.589 13:45:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.848 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.849 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:44.849 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.849 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.849 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.849 [2024-10-01 13:45:55.011473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.849 13:45:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.849 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.108 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.108 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.108 "name": "Existed_Raid", 00:11:45.108 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:45.108 "strip_size_kb": 64, 00:11:45.108 "state": "configuring", 00:11:45.108 "raid_level": "concat", 00:11:45.108 "superblock": true, 00:11:45.108 "num_base_bdevs": 3, 00:11:45.108 "num_base_bdevs_discovered": 2, 00:11:45.108 "num_base_bdevs_operational": 3, 00:11:45.108 "base_bdevs_list": [ 00:11:45.108 { 00:11:45.108 "name": "BaseBdev1", 00:11:45.108 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:45.108 "is_configured": true, 00:11:45.108 "data_offset": 2048, 00:11:45.108 "data_size": 63488 00:11:45.108 }, 00:11:45.108 { 00:11:45.108 "name": null, 00:11:45.108 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:45.108 "is_configured": 
false, 00:11:45.108 "data_offset": 0, 00:11:45.108 "data_size": 63488 00:11:45.108 }, 00:11:45.108 { 00:11:45.108 "name": "BaseBdev3", 00:11:45.108 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:45.108 "is_configured": true, 00:11:45.108 "data_offset": 2048, 00:11:45.108 "data_size": 63488 00:11:45.108 } 00:11:45.108 ] 00:11:45.108 }' 00:11:45.108 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.108 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.368 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.368 [2024-10-01 13:45:55.490790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:45.665 13:45:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.665 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.665 "name": "Existed_Raid", 00:11:45.665 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:45.665 "strip_size_kb": 64, 00:11:45.665 "state": "configuring", 00:11:45.665 "raid_level": "concat", 00:11:45.665 "superblock": true, 00:11:45.665 "num_base_bdevs": 3, 00:11:45.665 
"num_base_bdevs_discovered": 1, 00:11:45.665 "num_base_bdevs_operational": 3, 00:11:45.665 "base_bdevs_list": [ 00:11:45.665 { 00:11:45.665 "name": null, 00:11:45.665 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:45.665 "is_configured": false, 00:11:45.665 "data_offset": 0, 00:11:45.665 "data_size": 63488 00:11:45.665 }, 00:11:45.665 { 00:11:45.665 "name": null, 00:11:45.665 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:45.665 "is_configured": false, 00:11:45.665 "data_offset": 0, 00:11:45.665 "data_size": 63488 00:11:45.665 }, 00:11:45.665 { 00:11:45.666 "name": "BaseBdev3", 00:11:45.666 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:45.666 "is_configured": true, 00:11:45.666 "data_offset": 2048, 00:11:45.666 "data_size": 63488 00:11:45.666 } 00:11:45.666 ] 00:11:45.666 }' 00:11:45.666 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.666 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.924 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.925 13:45:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.925 [2024-10-01 13:45:56.083743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.925 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 
13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.184 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.184 "name": "Existed_Raid", 00:11:46.184 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:46.184 "strip_size_kb": 64, 00:11:46.184 "state": "configuring", 00:11:46.184 "raid_level": "concat", 00:11:46.184 "superblock": true, 00:11:46.184 "num_base_bdevs": 3, 00:11:46.184 "num_base_bdevs_discovered": 2, 00:11:46.184 "num_base_bdevs_operational": 3, 00:11:46.184 "base_bdevs_list": [ 00:11:46.184 { 00:11:46.184 "name": null, 00:11:46.184 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:46.184 "is_configured": false, 00:11:46.184 "data_offset": 0, 00:11:46.184 "data_size": 63488 00:11:46.184 }, 00:11:46.184 { 00:11:46.184 "name": "BaseBdev2", 00:11:46.184 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:46.184 "is_configured": true, 00:11:46.184 "data_offset": 2048, 00:11:46.184 "data_size": 63488 00:11:46.184 }, 00:11:46.184 { 00:11:46.184 "name": "BaseBdev3", 00:11:46.184 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:46.184 "is_configured": true, 00:11:46.184 "data_offset": 2048, 00:11:46.184 "data_size": 63488 00:11:46.184 } 00:11:46.184 ] 00:11:46.184 }' 00:11:46.184 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.184 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f681cf99-1f3a-4f6a-a0b3-73ab899f1138 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 [2024-10-01 13:45:56.629477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:46.443 NewBaseBdev 00:11:46.443 [2024-10-01 13:45:56.629927] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.443 [2024-10-01 13:45:56.629954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:46.443 [2024-10-01 13:45:56.630217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:46.443 [2024-10-01 13:45:56.630376] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.443 [2024-10-01 13:45:56.630387] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:11:46.443 [2024-10-01 13:45:56.630564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.443 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.702 [ 00:11:46.702 { 00:11:46.702 "name": "NewBaseBdev", 00:11:46.702 "aliases": [ 00:11:46.702 "f681cf99-1f3a-4f6a-a0b3-73ab899f1138" 00:11:46.702 ], 00:11:46.702 "product_name": "Malloc disk", 00:11:46.702 "block_size": 512, 
00:11:46.702 "num_blocks": 65536, 00:11:46.702 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:46.702 "assigned_rate_limits": { 00:11:46.702 "rw_ios_per_sec": 0, 00:11:46.702 "rw_mbytes_per_sec": 0, 00:11:46.702 "r_mbytes_per_sec": 0, 00:11:46.702 "w_mbytes_per_sec": 0 00:11:46.702 }, 00:11:46.702 "claimed": true, 00:11:46.702 "claim_type": "exclusive_write", 00:11:46.702 "zoned": false, 00:11:46.702 "supported_io_types": { 00:11:46.702 "read": true, 00:11:46.702 "write": true, 00:11:46.702 "unmap": true, 00:11:46.702 "flush": true, 00:11:46.702 "reset": true, 00:11:46.702 "nvme_admin": false, 00:11:46.702 "nvme_io": false, 00:11:46.702 "nvme_io_md": false, 00:11:46.702 "write_zeroes": true, 00:11:46.702 "zcopy": true, 00:11:46.702 "get_zone_info": false, 00:11:46.702 "zone_management": false, 00:11:46.702 "zone_append": false, 00:11:46.702 "compare": false, 00:11:46.702 "compare_and_write": false, 00:11:46.702 "abort": true, 00:11:46.702 "seek_hole": false, 00:11:46.702 "seek_data": false, 00:11:46.702 "copy": true, 00:11:46.702 "nvme_iov_md": false 00:11:46.702 }, 00:11:46.702 "memory_domains": [ 00:11:46.702 { 00:11:46.702 "dma_device_id": "system", 00:11:46.702 "dma_device_type": 1 00:11:46.702 }, 00:11:46.702 { 00:11:46.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.702 "dma_device_type": 2 00:11:46.702 } 00:11:46.702 ], 00:11:46.702 "driver_specific": {} 00:11:46.702 } 00:11:46.702 ] 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.702 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.702 "name": "Existed_Raid", 00:11:46.702 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:46.702 "strip_size_kb": 64, 00:11:46.702 "state": "online", 00:11:46.702 "raid_level": "concat", 00:11:46.702 "superblock": true, 00:11:46.702 "num_base_bdevs": 3, 00:11:46.702 "num_base_bdevs_discovered": 3, 00:11:46.702 "num_base_bdevs_operational": 3, 00:11:46.702 "base_bdevs_list": [ 00:11:46.702 { 00:11:46.702 "name": "NewBaseBdev", 00:11:46.702 "uuid": 
"f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:46.702 "is_configured": true, 00:11:46.702 "data_offset": 2048, 00:11:46.702 "data_size": 63488 00:11:46.702 }, 00:11:46.702 { 00:11:46.702 "name": "BaseBdev2", 00:11:46.702 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:46.702 "is_configured": true, 00:11:46.702 "data_offset": 2048, 00:11:46.703 "data_size": 63488 00:11:46.703 }, 00:11:46.703 { 00:11:46.703 "name": "BaseBdev3", 00:11:46.703 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:46.703 "is_configured": true, 00:11:46.703 "data_offset": 2048, 00:11:46.703 "data_size": 63488 00:11:46.703 } 00:11:46.703 ] 00:11:46.703 }' 00:11:46.703 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.703 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.961 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:46.961 [2024-10-01 13:45:57.145107] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.220 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.220 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.220 "name": "Existed_Raid", 00:11:47.220 "aliases": [ 00:11:47.220 "7e656dea-8213-48a2-a775-dee104229e29" 00:11:47.220 ], 00:11:47.220 "product_name": "Raid Volume", 00:11:47.220 "block_size": 512, 00:11:47.220 "num_blocks": 190464, 00:11:47.220 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:47.220 "assigned_rate_limits": { 00:11:47.220 "rw_ios_per_sec": 0, 00:11:47.220 "rw_mbytes_per_sec": 0, 00:11:47.220 "r_mbytes_per_sec": 0, 00:11:47.220 "w_mbytes_per_sec": 0 00:11:47.220 }, 00:11:47.220 "claimed": false, 00:11:47.220 "zoned": false, 00:11:47.220 "supported_io_types": { 00:11:47.220 "read": true, 00:11:47.220 "write": true, 00:11:47.220 "unmap": true, 00:11:47.220 "flush": true, 00:11:47.220 "reset": true, 00:11:47.220 "nvme_admin": false, 00:11:47.220 "nvme_io": false, 00:11:47.220 "nvme_io_md": false, 00:11:47.220 "write_zeroes": true, 00:11:47.220 "zcopy": false, 00:11:47.220 "get_zone_info": false, 00:11:47.220 "zone_management": false, 00:11:47.220 "zone_append": false, 00:11:47.220 "compare": false, 00:11:47.220 "compare_and_write": false, 00:11:47.220 "abort": false, 00:11:47.220 "seek_hole": false, 00:11:47.220 "seek_data": false, 00:11:47.220 "copy": false, 00:11:47.220 "nvme_iov_md": false 00:11:47.220 }, 00:11:47.220 "memory_domains": [ 00:11:47.220 { 00:11:47.220 "dma_device_id": "system", 00:11:47.220 "dma_device_type": 1 00:11:47.220 }, 00:11:47.220 { 00:11:47.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.220 "dma_device_type": 2 00:11:47.220 }, 00:11:47.220 { 00:11:47.220 "dma_device_id": "system", 00:11:47.220 "dma_device_type": 1 00:11:47.220 }, 00:11:47.220 { 00:11:47.220 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.220 "dma_device_type": 2 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "dma_device_id": "system", 00:11:47.221 "dma_device_type": 1 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.221 "dma_device_type": 2 00:11:47.221 } 00:11:47.221 ], 00:11:47.221 "driver_specific": { 00:11:47.221 "raid": { 00:11:47.221 "uuid": "7e656dea-8213-48a2-a775-dee104229e29", 00:11:47.221 "strip_size_kb": 64, 00:11:47.221 "state": "online", 00:11:47.221 "raid_level": "concat", 00:11:47.221 "superblock": true, 00:11:47.221 "num_base_bdevs": 3, 00:11:47.221 "num_base_bdevs_discovered": 3, 00:11:47.221 "num_base_bdevs_operational": 3, 00:11:47.221 "base_bdevs_list": [ 00:11:47.221 { 00:11:47.221 "name": "NewBaseBdev", 00:11:47.221 "uuid": "f681cf99-1f3a-4f6a-a0b3-73ab899f1138", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 2048, 00:11:47.221 "data_size": 63488 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "name": "BaseBdev2", 00:11:47.221 "uuid": "2cb9bf2f-8804-4167-978d-74719dc9e4fa", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 2048, 00:11:47.221 "data_size": 63488 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "name": "BaseBdev3", 00:11:47.221 "uuid": "10334e27-f72a-42aa-89ca-6a817df498aa", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 2048, 00:11:47.221 "data_size": 63488 00:11:47.221 } 00:11:47.221 ] 00:11:47.221 } 00:11:47.221 } 00:11:47.221 }' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:47.221 BaseBdev2 00:11:47.221 BaseBdev3' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 [2024-10-01 13:45:57.396461] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.221 [2024-10-01 13:45:57.396588] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.221 [2024-10-01 13:45:57.396735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.221 [2024-10-01 13:45:57.396823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.221 [2024-10-01 13:45:57.396943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66130 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66130 ']' 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66130 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:47.221 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.480 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66130 00:11:47.481 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.481 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.481 killing process with pid 66130 00:11:47.481 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66130' 00:11:47.481 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66130 00:11:47.481 [2024-10-01 13:45:57.449031] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.481 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66130 00:11:47.739 [2024-10-01 13:45:57.754530] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.115 13:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:49.115 00:11:49.115 real 0m10.782s 00:11:49.115 user 0m17.015s 00:11:49.115 sys 0m2.138s 00:11:49.115 13:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:49.115 13:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 ************************************ 00:11:49.115 END TEST raid_state_function_test_sb 00:11:49.115 ************************************ 00:11:49.115 13:45:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:49.115 13:45:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.115 13:45:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.115 13:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.115 ************************************ 00:11:49.115 START TEST raid_superblock_test 00:11:49.115 ************************************ 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:49.115 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:49.116 13:45:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66756 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66756 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66756 ']' 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.116 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.116 [2024-10-01 13:45:59.210924] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:49.116 [2024-10-01 13:45:59.211120] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66756 ] 00:11:49.373 [2024-10-01 13:45:59.376222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.633 [2024-10-01 13:45:59.592018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.633 [2024-10-01 13:45:59.798254] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.633 [2024-10-01 13:45:59.798296] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:49.892 
13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.892 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.152 malloc1 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.152 [2024-10-01 13:46:00.109739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.152 [2024-10-01 13:46:00.109955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.152 [2024-10-01 13:46:00.110019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:50.152 [2024-10-01 13:46:00.110103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.152 [2024-10-01 13:46:00.112728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.152 [2024-10-01 13:46:00.112868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.152 pt1 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.152 malloc2 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.152 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 [2024-10-01 13:46:00.181582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.153 [2024-10-01 13:46:00.181788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.153 [2024-10-01 13:46:00.181853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:50.153 [2024-10-01 13:46:00.181928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.153 [2024-10-01 13:46:00.184451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.153 pt2 00:11:50.153 [2024-10-01 13:46:00.184591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 malloc3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 [2024-10-01 13:46:00.238090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.153 [2024-10-01 13:46:00.238166] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.153 [2024-10-01 13:46:00.238192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:50.153 [2024-10-01 13:46:00.238204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.153 [2024-10-01 13:46:00.240692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.153 [2024-10-01 13:46:00.240733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.153 pt3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 [2024-10-01 13:46:00.250148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.153 [2024-10-01 13:46:00.252421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.153 [2024-10-01 13:46:00.252612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.153 [2024-10-01 13:46:00.252833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:50.153 [2024-10-01 13:46:00.253050] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:50.153 [2024-10-01 13:46:00.253375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:50.153 [2024-10-01 13:46:00.253610] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:50.153 [2024-10-01 13:46:00.253652] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:50.153 [2024-10-01 13:46:00.253993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.153 "name": "raid_bdev1", 00:11:50.153 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:50.153 "strip_size_kb": 64, 00:11:50.153 "state": "online", 00:11:50.153 "raid_level": "concat", 00:11:50.153 "superblock": true, 00:11:50.153 "num_base_bdevs": 3, 00:11:50.153 "num_base_bdevs_discovered": 3, 00:11:50.153 "num_base_bdevs_operational": 3, 00:11:50.153 "base_bdevs_list": [ 00:11:50.153 { 00:11:50.153 "name": "pt1", 00:11:50.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "name": "pt2", 00:11:50.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 }, 00:11:50.153 { 00:11:50.153 "name": "pt3", 00:11:50.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.153 "is_configured": true, 00:11:50.153 "data_offset": 2048, 00:11:50.153 "data_size": 63488 00:11:50.153 } 00:11:50.153 ] 00:11:50.153 }' 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.153 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.723 [2024-10-01 13:46:00.633865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.723 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.723 "name": "raid_bdev1", 00:11:50.723 "aliases": [ 00:11:50.723 "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f" 00:11:50.723 ], 00:11:50.723 "product_name": "Raid Volume", 00:11:50.723 "block_size": 512, 00:11:50.723 "num_blocks": 190464, 00:11:50.723 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:50.723 "assigned_rate_limits": { 00:11:50.723 "rw_ios_per_sec": 0, 00:11:50.723 "rw_mbytes_per_sec": 0, 00:11:50.723 "r_mbytes_per_sec": 0, 00:11:50.723 "w_mbytes_per_sec": 0 00:11:50.723 }, 00:11:50.723 "claimed": false, 00:11:50.723 "zoned": false, 00:11:50.723 "supported_io_types": { 00:11:50.723 "read": true, 00:11:50.723 "write": true, 00:11:50.723 "unmap": true, 00:11:50.723 "flush": true, 00:11:50.723 "reset": true, 00:11:50.723 "nvme_admin": false, 00:11:50.723 "nvme_io": false, 00:11:50.723 "nvme_io_md": false, 00:11:50.723 "write_zeroes": true, 00:11:50.723 "zcopy": false, 00:11:50.723 "get_zone_info": false, 00:11:50.723 "zone_management": false, 00:11:50.723 "zone_append": false, 00:11:50.723 "compare": 
false, 00:11:50.724 "compare_and_write": false, 00:11:50.724 "abort": false, 00:11:50.724 "seek_hole": false, 00:11:50.724 "seek_data": false, 00:11:50.724 "copy": false, 00:11:50.724 "nvme_iov_md": false 00:11:50.724 }, 00:11:50.724 "memory_domains": [ 00:11:50.724 { 00:11:50.724 "dma_device_id": "system", 00:11:50.724 "dma_device_type": 1 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.724 "dma_device_type": 2 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "dma_device_id": "system", 00:11:50.724 "dma_device_type": 1 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.724 "dma_device_type": 2 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "dma_device_id": "system", 00:11:50.724 "dma_device_type": 1 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.724 "dma_device_type": 2 00:11:50.724 } 00:11:50.724 ], 00:11:50.724 "driver_specific": { 00:11:50.724 "raid": { 00:11:50.724 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:50.724 "strip_size_kb": 64, 00:11:50.724 "state": "online", 00:11:50.724 "raid_level": "concat", 00:11:50.724 "superblock": true, 00:11:50.724 "num_base_bdevs": 3, 00:11:50.724 "num_base_bdevs_discovered": 3, 00:11:50.724 "num_base_bdevs_operational": 3, 00:11:50.724 "base_bdevs_list": [ 00:11:50.724 { 00:11:50.724 "name": "pt1", 00:11:50.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.724 "is_configured": true, 00:11:50.724 "data_offset": 2048, 00:11:50.724 "data_size": 63488 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "name": "pt2", 00:11:50.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.724 "is_configured": true, 00:11:50.724 "data_offset": 2048, 00:11:50.724 "data_size": 63488 00:11:50.724 }, 00:11:50.724 { 00:11:50.724 "name": "pt3", 00:11:50.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.724 "is_configured": true, 00:11:50.724 "data_offset": 2048, 00:11:50.724 
"data_size": 63488 00:11:50.724 } 00:11:50.724 ] 00:11:50.724 } 00:11:50.724 } 00:11:50.724 }' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.724 pt2 00:11:50.724 pt3' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.724 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 [2024-10-01 13:46:00.913536] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f ']' 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 [2024-10-01 13:46:00.961180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.985 [2024-10-01 13:46:00.961323] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.985 [2024-10-01 13:46:00.961487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.985 [2024-10-01 13:46:00.961585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.985 [2024-10-01 13:46:00.961698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 [2024-10-01 13:46:01.113005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:50.985 [2024-10-01 13:46:01.115219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:11:50.985 [2024-10-01 13:46:01.115394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:50.985 [2024-10-01 13:46:01.115467] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:50.985 [2024-10-01 13:46:01.115523] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:50.985 [2024-10-01 13:46:01.115546] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:50.985 [2024-10-01 13:46:01.115568] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.985 [2024-10-01 13:46:01.115579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:50.985 request: 00:11:50.985 { 00:11:50.985 "name": "raid_bdev1", 00:11:50.985 "raid_level": "concat", 00:11:50.985 "base_bdevs": [ 00:11:50.985 "malloc1", 00:11:50.985 "malloc2", 00:11:50.985 "malloc3" 00:11:50.985 ], 00:11:50.985 "strip_size_kb": 64, 00:11:50.985 "superblock": false, 00:11:50.985 "method": "bdev_raid_create", 00:11:50.985 "req_id": 1 00:11:50.985 } 00:11:50.985 Got JSON-RPC error response 00:11:50.985 response: 00:11:50.985 { 00:11:50.985 "code": -17, 00:11:50.985 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:50.985 } 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.985 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.985 [2024-10-01 13:46:01.172885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.985 [2024-10-01 13:46:01.173058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.986 [2024-10-01 13:46:01.173118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:50.986 [2024-10-01 13:46:01.173191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.245 [2024-10-01 13:46:01.175639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.245 [2024-10-01 13:46:01.175786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:51.245 [2024-10-01 13:46:01.175900] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:51.245 [2024-10-01 13:46:01.175964] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:51.245 pt1 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.245 "name": "raid_bdev1", 
00:11:51.245 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:51.245 "strip_size_kb": 64, 00:11:51.245 "state": "configuring", 00:11:51.245 "raid_level": "concat", 00:11:51.245 "superblock": true, 00:11:51.245 "num_base_bdevs": 3, 00:11:51.245 "num_base_bdevs_discovered": 1, 00:11:51.245 "num_base_bdevs_operational": 3, 00:11:51.245 "base_bdevs_list": [ 00:11:51.245 { 00:11:51.245 "name": "pt1", 00:11:51.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.245 "is_configured": true, 00:11:51.245 "data_offset": 2048, 00:11:51.245 "data_size": 63488 00:11:51.245 }, 00:11:51.245 { 00:11:51.245 "name": null, 00:11:51.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.245 "is_configured": false, 00:11:51.245 "data_offset": 2048, 00:11:51.245 "data_size": 63488 00:11:51.245 }, 00:11:51.245 { 00:11:51.245 "name": null, 00:11:51.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.245 "is_configured": false, 00:11:51.245 "data_offset": 2048, 00:11:51.245 "data_size": 63488 00:11:51.245 } 00:11:51.245 ] 00:11:51.245 }' 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.245 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.504 [2024-10-01 13:46:01.652258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.504 [2024-10-01 13:46:01.652478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.504 [2024-10-01 13:46:01.652545] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:51.504 [2024-10-01 13:46:01.652561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.504 [2024-10-01 13:46:01.653039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.504 [2024-10-01 13:46:01.653059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.504 [2024-10-01 13:46:01.653144] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:51.504 [2024-10-01 13:46:01.653168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.504 pt2 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.504 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 [2024-10-01 13:46:01.664255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.505 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.763 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.763 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.763 "name": "raid_bdev1", 00:11:51.763 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:51.763 "strip_size_kb": 64, 00:11:51.763 "state": "configuring", 00:11:51.763 "raid_level": "concat", 00:11:51.763 "superblock": true, 00:11:51.763 "num_base_bdevs": 3, 00:11:51.763 "num_base_bdevs_discovered": 1, 00:11:51.763 "num_base_bdevs_operational": 3, 00:11:51.763 "base_bdevs_list": [ 00:11:51.763 { 00:11:51.763 "name": "pt1", 00:11:51.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.763 "is_configured": true, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 }, 00:11:51.763 { 00:11:51.763 "name": null, 00:11:51.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.763 "is_configured": false, 00:11:51.763 "data_offset": 0, 00:11:51.763 "data_size": 63488 00:11:51.763 }, 00:11:51.763 { 00:11:51.763 "name": null, 00:11:51.763 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.763 "is_configured": false, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 } 00:11:51.763 ] 00:11:51.763 }' 00:11:51.763 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.763 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.022 [2024-10-01 13:46:02.051663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:52.022 [2024-10-01 13:46:02.051890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.022 [2024-10-01 13:46:02.051948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:52.022 [2024-10-01 13:46:02.052076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.022 [2024-10-01 13:46:02.052595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.022 [2024-10-01 13:46:02.052628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:52.022 [2024-10-01 13:46:02.052717] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:52.022 [2024-10-01 13:46:02.052754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.022 pt2 00:11:52.022 13:46:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.022 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.022 [2024-10-01 13:46:02.063657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:52.022 [2024-10-01 13:46:02.063717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.023 [2024-10-01 13:46:02.063737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:52.023 [2024-10-01 13:46:02.063751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.023 [2024-10-01 13:46:02.064181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.023 [2024-10-01 13:46:02.064206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:52.023 [2024-10-01 13:46:02.064278] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:52.023 [2024-10-01 13:46:02.064303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:52.023 [2024-10-01 13:46:02.064441] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.023 [2024-10-01 13:46:02.064455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:52.023 [2024-10-01 13:46:02.064718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:52.023 [2024-10-01 13:46:02.064855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.023 [2024-10-01 13:46:02.064864] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:52.023 [2024-10-01 13:46:02.065013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.023 pt3 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.023 13:46:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.023 "name": "raid_bdev1", 00:11:52.023 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:52.023 "strip_size_kb": 64, 00:11:52.023 "state": "online", 00:11:52.023 "raid_level": "concat", 00:11:52.023 "superblock": true, 00:11:52.023 "num_base_bdevs": 3, 00:11:52.023 "num_base_bdevs_discovered": 3, 00:11:52.023 "num_base_bdevs_operational": 3, 00:11:52.023 "base_bdevs_list": [ 00:11:52.023 { 00:11:52.023 "name": "pt1", 00:11:52.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.023 "is_configured": true, 00:11:52.023 "data_offset": 2048, 00:11:52.023 "data_size": 63488 00:11:52.023 }, 00:11:52.023 { 00:11:52.023 "name": "pt2", 00:11:52.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.023 "is_configured": true, 00:11:52.023 "data_offset": 2048, 00:11:52.023 "data_size": 63488 00:11:52.023 }, 00:11:52.023 { 00:11:52.023 "name": "pt3", 00:11:52.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.023 "is_configured": true, 00:11:52.023 "data_offset": 2048, 00:11:52.023 "data_size": 63488 00:11:52.023 } 00:11:52.023 ] 00:11:52.023 }' 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.023 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.590 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:52.590 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 [2024-10-01 13:46:02.503785] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.591 "name": "raid_bdev1", 00:11:52.591 "aliases": [ 00:11:52.591 "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f" 00:11:52.591 ], 00:11:52.591 "product_name": "Raid Volume", 00:11:52.591 "block_size": 512, 00:11:52.591 "num_blocks": 190464, 00:11:52.591 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:52.591 "assigned_rate_limits": { 00:11:52.591 "rw_ios_per_sec": 0, 00:11:52.591 "rw_mbytes_per_sec": 0, 00:11:52.591 "r_mbytes_per_sec": 0, 00:11:52.591 "w_mbytes_per_sec": 0 00:11:52.591 }, 00:11:52.591 "claimed": false, 00:11:52.591 "zoned": false, 00:11:52.591 "supported_io_types": { 00:11:52.591 "read": true, 00:11:52.591 "write": true, 00:11:52.591 "unmap": true, 00:11:52.591 "flush": true, 00:11:52.591 "reset": true, 00:11:52.591 "nvme_admin": false, 00:11:52.591 "nvme_io": false, 00:11:52.591 
"nvme_io_md": false, 00:11:52.591 "write_zeroes": true, 00:11:52.591 "zcopy": false, 00:11:52.591 "get_zone_info": false, 00:11:52.591 "zone_management": false, 00:11:52.591 "zone_append": false, 00:11:52.591 "compare": false, 00:11:52.591 "compare_and_write": false, 00:11:52.591 "abort": false, 00:11:52.591 "seek_hole": false, 00:11:52.591 "seek_data": false, 00:11:52.591 "copy": false, 00:11:52.591 "nvme_iov_md": false 00:11:52.591 }, 00:11:52.591 "memory_domains": [ 00:11:52.591 { 00:11:52.591 "dma_device_id": "system", 00:11:52.591 "dma_device_type": 1 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.591 "dma_device_type": 2 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "dma_device_id": "system", 00:11:52.591 "dma_device_type": 1 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.591 "dma_device_type": 2 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "dma_device_id": "system", 00:11:52.591 "dma_device_type": 1 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.591 "dma_device_type": 2 00:11:52.591 } 00:11:52.591 ], 00:11:52.591 "driver_specific": { 00:11:52.591 "raid": { 00:11:52.591 "uuid": "6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f", 00:11:52.591 "strip_size_kb": 64, 00:11:52.591 "state": "online", 00:11:52.591 "raid_level": "concat", 00:11:52.591 "superblock": true, 00:11:52.591 "num_base_bdevs": 3, 00:11:52.591 "num_base_bdevs_discovered": 3, 00:11:52.591 "num_base_bdevs_operational": 3, 00:11:52.591 "base_bdevs_list": [ 00:11:52.591 { 00:11:52.591 "name": "pt1", 00:11:52.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.591 "is_configured": true, 00:11:52.591 "data_offset": 2048, 00:11:52.591 "data_size": 63488 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "name": "pt2", 00:11:52.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.591 "is_configured": true, 00:11:52.591 "data_offset": 2048, 00:11:52.591 "data_size": 
63488 00:11:52.591 }, 00:11:52.591 { 00:11:52.591 "name": "pt3", 00:11:52.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.591 "is_configured": true, 00:11:52.591 "data_offset": 2048, 00:11:52.591 "data_size": 63488 00:11:52.591 } 00:11:52.591 ] 00:11:52.591 } 00:11:52.591 } 00:11:52.591 }' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:52.591 pt2 00:11:52.591 pt3' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.591 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:52.591 [2024-10-01 13:46:02.779623] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f '!=' 6ae5e934-deb1-40bf-8d44-4cdc0c3faa6f ']' 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66756 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66756 ']' 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66756 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66756 00:11:52.850 killing process with pid 66756 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66756' 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66756 00:11:52.850 [2024-10-01 13:46:02.854011] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.850 [2024-10-01 
13:46:02.854111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.850 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66756 00:11:52.850 [2024-10-01 13:46:02.854180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.850 [2024-10-01 13:46:02.854194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:53.110 [2024-10-01 13:46:03.166264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.484 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:54.484 00:11:54.484 real 0m5.336s 00:11:54.484 user 0m7.548s 00:11:54.484 sys 0m1.018s 00:11:54.484 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.484 ************************************ 00:11:54.484 END TEST raid_superblock_test 00:11:54.484 ************************************ 00:11:54.484 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 13:46:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:54.484 13:46:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:54.484 13:46:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.484 13:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 ************************************ 00:11:54.484 START TEST raid_read_error_test 00:11:54.484 ************************************ 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:54.484 
13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:54.484 13:46:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hd4UftEezk 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67009 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67009 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67009 ']' 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.484 13:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 [2024-10-01 13:46:04.648922] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:11:54.484 [2024-10-01 13:46:04.649055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67009 ] 00:11:54.743 [2024-10-01 13:46:04.819288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.001 [2024-10-01 13:46:05.034146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.259 [2024-10-01 13:46:05.243567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.259 [2024-10-01 13:46:05.243809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 BaseBdev1_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 true 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 [2024-10-01 13:46:05.574270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.519 [2024-10-01 13:46:05.574466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.519 [2024-10-01 13:46:05.574494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.519 [2024-10-01 13:46:05.574508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.519 [2024-10-01 13:46:05.576879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.519 [2024-10-01 13:46:05.576920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.519 BaseBdev1 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 BaseBdev2_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 true 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 [2024-10-01 13:46:05.652058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.519 [2024-10-01 13:46:05.652122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.519 [2024-10-01 13:46:05.652141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.519 [2024-10-01 13:46:05.652155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.519 [2024-10-01 13:46:05.654533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.519 [2024-10-01 13:46:05.654575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.519 BaseBdev2 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.519 BaseBdev3_malloc 00:11:55.519 13:46:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.519 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.786 true 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.786 [2024-10-01 13:46:05.720985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.786 [2024-10-01 13:46:05.721158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.786 [2024-10-01 13:46:05.721187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.786 [2024-10-01 13:46:05.721218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.786 [2024-10-01 13:46:05.723697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.786 [2024-10-01 13:46:05.723740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.786 BaseBdev3 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.786 [2024-10-01 13:46:05.733086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.786 [2024-10-01 13:46:05.735346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.786 [2024-10-01 13:46:05.735442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.786 [2024-10-01 13:46:05.735644] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.786 [2024-10-01 13:46:05.735656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:55.786 [2024-10-01 13:46:05.735953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:55.786 [2024-10-01 13:46:05.736108] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.786 [2024-10-01 13:46:05.736122] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:55.786 [2024-10-01 13:46:05.736287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.786 13:46:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.786 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.787 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.787 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.787 "name": "raid_bdev1", 00:11:55.787 "uuid": "6dbfae27-8c56-4188-8ac1-aa75355eed67", 00:11:55.787 "strip_size_kb": 64, 00:11:55.787 "state": "online", 00:11:55.787 "raid_level": "concat", 00:11:55.787 "superblock": true, 00:11:55.787 "num_base_bdevs": 3, 00:11:55.787 "num_base_bdevs_discovered": 3, 00:11:55.787 "num_base_bdevs_operational": 3, 00:11:55.787 "base_bdevs_list": [ 00:11:55.787 { 00:11:55.787 "name": "BaseBdev1", 00:11:55.787 "uuid": "33056d34-938a-57c9-979a-c74e52db2603", 00:11:55.787 "is_configured": true, 00:11:55.787 "data_offset": 2048, 00:11:55.787 "data_size": 63488 00:11:55.787 }, 00:11:55.787 { 00:11:55.787 "name": "BaseBdev2", 00:11:55.787 "uuid": "2b23815f-3823-5db4-9198-df9d6083ebde", 00:11:55.787 "is_configured": true, 00:11:55.787 "data_offset": 2048, 00:11:55.787 "data_size": 63488 
00:11:55.787 }, 00:11:55.787 { 00:11:55.787 "name": "BaseBdev3", 00:11:55.787 "uuid": "07de6a2d-c2b8-53ac-9762-e66b04f9ff79", 00:11:55.787 "is_configured": true, 00:11:55.787 "data_offset": 2048, 00:11:55.787 "data_size": 63488 00:11:55.787 } 00:11:55.787 ] 00:11:55.787 }' 00:11:55.787 13:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.787 13:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.044 13:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.044 13:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.044 [2024-10-01 13:46:06.233722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:56.978 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:56.978 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.978 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.978 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.236 "name": "raid_bdev1", 00:11:57.236 "uuid": "6dbfae27-8c56-4188-8ac1-aa75355eed67", 00:11:57.236 "strip_size_kb": 64, 00:11:57.236 "state": "online", 00:11:57.236 "raid_level": "concat", 00:11:57.236 "superblock": true, 00:11:57.236 "num_base_bdevs": 3, 00:11:57.236 "num_base_bdevs_discovered": 3, 00:11:57.236 "num_base_bdevs_operational": 3, 00:11:57.236 "base_bdevs_list": [ 00:11:57.236 { 00:11:57.236 "name": "BaseBdev1", 00:11:57.236 "uuid": "33056d34-938a-57c9-979a-c74e52db2603", 00:11:57.236 "is_configured": true, 00:11:57.236 "data_offset": 2048, 00:11:57.236 "data_size": 63488 
00:11:57.236 }, 00:11:57.236 { 00:11:57.236 "name": "BaseBdev2", 00:11:57.236 "uuid": "2b23815f-3823-5db4-9198-df9d6083ebde", 00:11:57.236 "is_configured": true, 00:11:57.236 "data_offset": 2048, 00:11:57.236 "data_size": 63488 00:11:57.236 }, 00:11:57.236 { 00:11:57.236 "name": "BaseBdev3", 00:11:57.236 "uuid": "07de6a2d-c2b8-53ac-9762-e66b04f9ff79", 00:11:57.236 "is_configured": true, 00:11:57.236 "data_offset": 2048, 00:11:57.236 "data_size": 63488 00:11:57.236 } 00:11:57.236 ] 00:11:57.236 }' 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.236 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.495 [2024-10-01 13:46:07.590144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.495 [2024-10-01 13:46:07.590178] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.495 [2024-10-01 13:46:07.592779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.495 [2024-10-01 13:46:07.592827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.495 [2024-10-01 13:46:07.592865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.495 [2024-10-01 13:46:07.592876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:57.495 { 00:11:57.495 "results": [ 00:11:57.495 { 00:11:57.495 "job": "raid_bdev1", 00:11:57.495 "core_mask": "0x1", 00:11:57.495 "workload": "randrw", 00:11:57.495 "percentage": 50, 
00:11:57.495 "status": "finished", 00:11:57.495 "queue_depth": 1, 00:11:57.495 "io_size": 131072, 00:11:57.495 "runtime": 1.356389, 00:11:57.495 "iops": 16052.91697293328, 00:11:57.495 "mibps": 2006.61462161666, 00:11:57.495 "io_failed": 1, 00:11:57.495 "io_timeout": 0, 00:11:57.495 "avg_latency_us": 86.32566133189474, 00:11:57.495 "min_latency_us": 26.936546184738955, 00:11:57.495 "max_latency_us": 1394.9429718875501 00:11:57.495 } 00:11:57.495 ], 00:11:57.495 "core_count": 1 00:11:57.495 } 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67009 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67009 ']' 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67009 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67009 00:11:57.495 killing process with pid 67009 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67009' 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67009 00:11:57.495 [2024-10-01 13:46:07.644111] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.495 13:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67009 00:11:57.753 [2024-10-01 
13:46:07.880988] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hd4UftEezk 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.160 ************************************ 00:11:59.160 END TEST raid_read_error_test 00:11:59.160 ************************************ 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:59.160 00:11:59.160 real 0m4.710s 00:11:59.160 user 0m5.485s 00:11:59.160 sys 0m0.655s 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.160 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.160 13:46:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:59.160 13:46:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:59.160 13:46:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.160 13:46:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.160 ************************************ 00:11:59.160 START TEST raid_write_error_test 00:11:59.160 ************************************ 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:11:59.160 13:46:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.160 13:46:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Bghi43g25Y 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67149 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67149 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67149 ']' 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.160 13:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.418 [2024-10-01 13:46:09.425715] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:59.418 [2024-10-01 13:46:09.425834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67149 ] 00:11:59.418 [2024-10-01 13:46:09.596600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.675 [2024-10-01 13:46:09.811035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.032 [2024-10-01 13:46:10.021513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.032 [2024-10-01 13:46:10.021581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 BaseBdev1_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 true 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 [2024-10-01 13:46:10.316447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.290 [2024-10-01 13:46:10.316643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.290 [2024-10-01 13:46:10.316701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.290 [2024-10-01 13:46:10.316841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.290 [2024-10-01 13:46:10.319284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.290 BaseBdev1 00:12:00.290 [2024-10-01 13:46:10.319463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.290 BaseBdev2_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 true 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 [2024-10-01 13:46:10.399864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.290 [2024-10-01 13:46:10.400035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.290 [2024-10-01 13:46:10.400062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.290 [2024-10-01 13:46:10.400076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.290 [2024-10-01 13:46:10.402408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.290 [2024-10-01 13:46:10.402447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.290 BaseBdev2 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.290 13:46:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 BaseBdev3_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 true 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.290 [2024-10-01 13:46:10.470966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.290 [2024-10-01 13:46:10.471129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.290 [2024-10-01 13:46:10.471181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.290 [2024-10-01 13:46:10.471323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.290 [2024-10-01 13:46:10.473821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.290 [2024-10-01 13:46:10.473958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:00.290 BaseBdev3 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.290 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 [2024-10-01 13:46:10.483035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.549 [2024-10-01 13:46:10.485121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.549 [2024-10-01 13:46:10.485202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.549 [2024-10-01 13:46:10.485391] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.549 [2024-10-01 13:46:10.485421] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:00.549 [2024-10-01 13:46:10.485692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:00.549 [2024-10-01 13:46:10.485834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.549 [2024-10-01 13:46:10.485847] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:00.549 [2024-10-01 13:46:10.485986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.549 "name": "raid_bdev1", 00:12:00.549 "uuid": "fcde5d2c-d59e-430f-8c29-aa86eeb29edb", 00:12:00.549 "strip_size_kb": 64, 00:12:00.549 "state": "online", 00:12:00.549 "raid_level": "concat", 00:12:00.549 "superblock": true, 00:12:00.549 "num_base_bdevs": 3, 00:12:00.549 "num_base_bdevs_discovered": 3, 00:12:00.549 "num_base_bdevs_operational": 3, 00:12:00.549 "base_bdevs_list": [ 00:12:00.549 { 00:12:00.549 
"name": "BaseBdev1", 00:12:00.549 "uuid": "877817f8-faf8-5a8c-b7e7-3108a14b7920", 00:12:00.549 "is_configured": true, 00:12:00.549 "data_offset": 2048, 00:12:00.549 "data_size": 63488 00:12:00.549 }, 00:12:00.549 { 00:12:00.549 "name": "BaseBdev2", 00:12:00.549 "uuid": "685ae1f2-c0a7-5ad5-9a2e-67414d46991d", 00:12:00.549 "is_configured": true, 00:12:00.549 "data_offset": 2048, 00:12:00.549 "data_size": 63488 00:12:00.549 }, 00:12:00.549 { 00:12:00.549 "name": "BaseBdev3", 00:12:00.549 "uuid": "2e5fa466-5da2-5223-b9af-949d6186c759", 00:12:00.549 "is_configured": true, 00:12:00.549 "data_offset": 2048, 00:12:00.549 "data_size": 63488 00:12:00.549 } 00:12:00.549 ] 00:12:00.549 }' 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.549 13:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.809 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:00.809 13:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:00.809 [2024-10-01 13:46:10.939791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.741 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.742 "name": "raid_bdev1", 00:12:01.742 "uuid": "fcde5d2c-d59e-430f-8c29-aa86eeb29edb", 00:12:01.742 "strip_size_kb": 64, 00:12:01.742 "state": "online", 
00:12:01.742 "raid_level": "concat", 00:12:01.742 "superblock": true, 00:12:01.742 "num_base_bdevs": 3, 00:12:01.742 "num_base_bdevs_discovered": 3, 00:12:01.742 "num_base_bdevs_operational": 3, 00:12:01.742 "base_bdevs_list": [ 00:12:01.742 { 00:12:01.742 "name": "BaseBdev1", 00:12:01.742 "uuid": "877817f8-faf8-5a8c-b7e7-3108a14b7920", 00:12:01.742 "is_configured": true, 00:12:01.742 "data_offset": 2048, 00:12:01.742 "data_size": 63488 00:12:01.742 }, 00:12:01.742 { 00:12:01.742 "name": "BaseBdev2", 00:12:01.742 "uuid": "685ae1f2-c0a7-5ad5-9a2e-67414d46991d", 00:12:01.742 "is_configured": true, 00:12:01.742 "data_offset": 2048, 00:12:01.742 "data_size": 63488 00:12:01.742 }, 00:12:01.742 { 00:12:01.742 "name": "BaseBdev3", 00:12:01.742 "uuid": "2e5fa466-5da2-5223-b9af-949d6186c759", 00:12:01.742 "is_configured": true, 00:12:01.742 "data_offset": 2048, 00:12:01.742 "data_size": 63488 00:12:01.742 } 00:12:01.742 ] 00:12:01.742 }' 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.742 13:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.307 13:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.307 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.307 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.307 [2024-10-01 13:46:12.221941] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.307 [2024-10-01 13:46:12.221975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.307 [2024-10-01 13:46:12.224571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.307 [2024-10-01 13:46:12.224620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.308 [2024-10-01 13:46:12.224659] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.308 [2024-10-01 13:46:12.224670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:02.308 { 00:12:02.308 "results": [ 00:12:02.308 { 00:12:02.308 "job": "raid_bdev1", 00:12:02.308 "core_mask": "0x1", 00:12:02.308 "workload": "randrw", 00:12:02.308 "percentage": 50, 00:12:02.308 "status": "finished", 00:12:02.308 "queue_depth": 1, 00:12:02.308 "io_size": 131072, 00:12:02.308 "runtime": 1.281961, 00:12:02.308 "iops": 16452.918614528837, 00:12:02.308 "mibps": 2056.6148268161046, 00:12:02.308 "io_failed": 1, 00:12:02.308 "io_timeout": 0, 00:12:02.308 "avg_latency_us": 84.10000782535631, 00:12:02.308 "min_latency_us": 26.936546184738955, 00:12:02.308 "max_latency_us": 1401.5228915662651 00:12:02.308 } 00:12:02.308 ], 00:12:02.308 "core_count": 1 00:12:02.308 } 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67149 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67149 ']' 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67149 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67149 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.308 killing process with pid 67149 00:12:02.308 
13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67149' 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67149 00:12:02.308 [2024-10-01 13:46:12.273722] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.308 13:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67149 00:12:02.565 [2024-10-01 13:46:12.507940] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Bghi43g25Y 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:12:03.942 00:12:03.942 real 0m4.537s 00:12:03.942 user 0m5.162s 00:12:03.942 sys 0m0.643s 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.942 13:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.942 ************************************ 00:12:03.942 END TEST raid_write_error_test 00:12:03.942 ************************************ 00:12:03.942 13:46:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:03.942 13:46:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:03.942 13:46:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.942 13:46:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.942 13:46:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.942 ************************************ 00:12:03.942 START TEST raid_state_function_test 00:12:03.942 ************************************ 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67298 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67298' 00:12:03.942 Process raid pid: 67298 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67298 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67298 ']' 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.942 13:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.942 [2024-10-01 13:46:14.025497] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:03.942 [2024-10-01 13:46:14.025806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.200 [2024-10-01 13:46:14.196654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.459 [2024-10-01 13:46:14.411033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.459 [2024-10-01 13:46:14.627830] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.459 [2024-10-01 13:46:14.628040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.717 [2024-10-01 13:46:14.865360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.717 [2024-10-01 13:46:14.865417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.717 [2024-10-01 13:46:14.865433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.717 [2024-10-01 13:46:14.865446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.717 [2024-10-01 13:46:14.865457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.717 [2024-10-01 13:46:14.865471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.717 
13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.717 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.976 13:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.976 "name": "Existed_Raid", 00:12:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.976 "strip_size_kb": 0, 00:12:04.976 "state": "configuring", 00:12:04.976 "raid_level": "raid1", 00:12:04.976 "superblock": false, 00:12:04.976 "num_base_bdevs": 3, 00:12:04.976 "num_base_bdevs_discovered": 0, 00:12:04.976 "num_base_bdevs_operational": 3, 00:12:04.976 "base_bdevs_list": [ 00:12:04.976 { 00:12:04.976 "name": "BaseBdev1", 00:12:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.976 "is_configured": false, 00:12:04.976 "data_offset": 0, 00:12:04.976 "data_size": 0 00:12:04.976 }, 00:12:04.976 { 00:12:04.976 "name": "BaseBdev2", 00:12:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.976 "is_configured": false, 00:12:04.976 "data_offset": 0, 00:12:04.976 "data_size": 0 00:12:04.976 }, 00:12:04.976 { 00:12:04.976 "name": "BaseBdev3", 00:12:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.976 "is_configured": false, 00:12:04.976 "data_offset": 0, 00:12:04.976 "data_size": 0 00:12:04.976 } 00:12:04.976 ] 00:12:04.976 }' 00:12:04.976 13:46:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.976 13:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 [2024-10-01 13:46:15.320635] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.235 [2024-10-01 13:46:15.320675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 [2024-10-01 13:46:15.332626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.235 [2024-10-01 13:46:15.332783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.235 [2024-10-01 13:46:15.332804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.235 [2024-10-01 13:46:15.332818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.235 [2024-10-01 13:46:15.332825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.235 [2024-10-01 13:46:15.332837] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 [2024-10-01 13:46:15.388591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.235 BaseBdev1 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.235 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 [ 00:12:05.235 { 00:12:05.235 "name": "BaseBdev1", 00:12:05.235 "aliases": [ 00:12:05.235 "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72" 00:12:05.235 ], 00:12:05.235 "product_name": "Malloc disk", 00:12:05.235 "block_size": 512, 00:12:05.235 "num_blocks": 65536, 00:12:05.235 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:05.235 "assigned_rate_limits": { 00:12:05.235 "rw_ios_per_sec": 0, 00:12:05.235 "rw_mbytes_per_sec": 0, 00:12:05.235 "r_mbytes_per_sec": 0, 00:12:05.235 "w_mbytes_per_sec": 0 00:12:05.235 }, 00:12:05.235 "claimed": true, 00:12:05.235 "claim_type": "exclusive_write", 00:12:05.235 "zoned": false, 00:12:05.235 "supported_io_types": { 00:12:05.235 "read": true, 00:12:05.235 "write": true, 00:12:05.235 "unmap": true, 00:12:05.235 "flush": true, 00:12:05.235 "reset": true, 00:12:05.235 "nvme_admin": false, 00:12:05.235 "nvme_io": false, 00:12:05.495 "nvme_io_md": false, 00:12:05.495 "write_zeroes": true, 00:12:05.495 "zcopy": true, 00:12:05.495 "get_zone_info": false, 00:12:05.495 "zone_management": false, 00:12:05.495 "zone_append": false, 00:12:05.495 "compare": false, 00:12:05.495 "compare_and_write": false, 00:12:05.495 "abort": true, 00:12:05.495 "seek_hole": false, 00:12:05.495 "seek_data": false, 00:12:05.495 "copy": true, 00:12:05.495 "nvme_iov_md": false 00:12:05.495 }, 00:12:05.495 "memory_domains": [ 00:12:05.495 { 00:12:05.495 "dma_device_id": "system", 00:12:05.495 "dma_device_type": 1 00:12:05.495 }, 00:12:05.495 { 00:12:05.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.495 "dma_device_type": 2 00:12:05.495 } 00:12:05.495 ], 00:12:05.495 "driver_specific": {} 00:12:05.495 } 00:12:05.495 ] 00:12:05.495 13:46:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:05.495 "name": "Existed_Raid", 00:12:05.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.495 "strip_size_kb": 0, 00:12:05.495 "state": "configuring", 00:12:05.495 "raid_level": "raid1", 00:12:05.495 "superblock": false, 00:12:05.495 "num_base_bdevs": 3, 00:12:05.495 "num_base_bdevs_discovered": 1, 00:12:05.495 "num_base_bdevs_operational": 3, 00:12:05.495 "base_bdevs_list": [ 00:12:05.495 { 00:12:05.495 "name": "BaseBdev1", 00:12:05.495 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:05.495 "is_configured": true, 00:12:05.495 "data_offset": 0, 00:12:05.495 "data_size": 65536 00:12:05.495 }, 00:12:05.495 { 00:12:05.495 "name": "BaseBdev2", 00:12:05.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.495 "is_configured": false, 00:12:05.495 "data_offset": 0, 00:12:05.495 "data_size": 0 00:12:05.495 }, 00:12:05.495 { 00:12:05.495 "name": "BaseBdev3", 00:12:05.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.495 "is_configured": false, 00:12:05.495 "data_offset": 0, 00:12:05.495 "data_size": 0 00:12:05.495 } 00:12:05.495 ] 00:12:05.495 }' 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.495 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.753 [2024-10-01 13:46:15.848092] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.753 [2024-10-01 13:46:15.848150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.753 [2024-10-01 13:46:15.856113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.753 [2024-10-01 13:46:15.858205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.753 [2024-10-01 13:46:15.858250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.753 [2024-10-01 13:46:15.858261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.753 [2024-10-01 13:46:15.858273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.753 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.753 "name": "Existed_Raid", 00:12:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.753 "strip_size_kb": 0, 00:12:05.753 "state": "configuring", 00:12:05.753 "raid_level": "raid1", 00:12:05.753 "superblock": false, 00:12:05.753 "num_base_bdevs": 3, 00:12:05.753 "num_base_bdevs_discovered": 1, 00:12:05.753 "num_base_bdevs_operational": 3, 00:12:05.754 "base_bdevs_list": [ 00:12:05.754 { 00:12:05.754 "name": "BaseBdev1", 00:12:05.754 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:05.754 "is_configured": true, 00:12:05.754 "data_offset": 0, 00:12:05.754 "data_size": 65536 00:12:05.754 }, 00:12:05.754 { 00:12:05.754 "name": "BaseBdev2", 00:12:05.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.754 
"is_configured": false, 00:12:05.754 "data_offset": 0, 00:12:05.754 "data_size": 0 00:12:05.754 }, 00:12:05.754 { 00:12:05.754 "name": "BaseBdev3", 00:12:05.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.754 "is_configured": false, 00:12:05.754 "data_offset": 0, 00:12:05.754 "data_size": 0 00:12:05.754 } 00:12:05.754 ] 00:12:05.754 }' 00:12:05.754 13:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.754 13:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 [2024-10-01 13:46:16.266646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.321 BaseBdev2 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.321 13:46:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 [ 00:12:06.321 { 00:12:06.321 "name": "BaseBdev2", 00:12:06.321 "aliases": [ 00:12:06.321 "2e1e0fe6-a36b-4331-a3fc-d702224211f1" 00:12:06.321 ], 00:12:06.321 "product_name": "Malloc disk", 00:12:06.321 "block_size": 512, 00:12:06.321 "num_blocks": 65536, 00:12:06.321 "uuid": "2e1e0fe6-a36b-4331-a3fc-d702224211f1", 00:12:06.321 "assigned_rate_limits": { 00:12:06.321 "rw_ios_per_sec": 0, 00:12:06.321 "rw_mbytes_per_sec": 0, 00:12:06.321 "r_mbytes_per_sec": 0, 00:12:06.321 "w_mbytes_per_sec": 0 00:12:06.321 }, 00:12:06.321 "claimed": true, 00:12:06.321 "claim_type": "exclusive_write", 00:12:06.321 "zoned": false, 00:12:06.321 "supported_io_types": { 00:12:06.321 "read": true, 00:12:06.321 "write": true, 00:12:06.321 "unmap": true, 00:12:06.321 "flush": true, 00:12:06.321 "reset": true, 00:12:06.321 "nvme_admin": false, 00:12:06.321 "nvme_io": false, 00:12:06.321 "nvme_io_md": false, 00:12:06.321 "write_zeroes": true, 00:12:06.321 "zcopy": true, 00:12:06.321 "get_zone_info": false, 00:12:06.321 "zone_management": false, 00:12:06.321 "zone_append": false, 00:12:06.321 "compare": false, 00:12:06.321 "compare_and_write": false, 00:12:06.321 "abort": true, 00:12:06.321 "seek_hole": false, 00:12:06.321 "seek_data": false, 00:12:06.321 "copy": true, 00:12:06.321 "nvme_iov_md": false 00:12:06.321 }, 00:12:06.321 
"memory_domains": [ 00:12:06.321 { 00:12:06.321 "dma_device_id": "system", 00:12:06.321 "dma_device_type": 1 00:12:06.321 }, 00:12:06.321 { 00:12:06.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.321 "dma_device_type": 2 00:12:06.321 } 00:12:06.321 ], 00:12:06.321 "driver_specific": {} 00:12:06.321 } 00:12:06.321 ] 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.321 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.322 "name": "Existed_Raid", 00:12:06.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.322 "strip_size_kb": 0, 00:12:06.322 "state": "configuring", 00:12:06.322 "raid_level": "raid1", 00:12:06.322 "superblock": false, 00:12:06.322 "num_base_bdevs": 3, 00:12:06.322 "num_base_bdevs_discovered": 2, 00:12:06.322 "num_base_bdevs_operational": 3, 00:12:06.322 "base_bdevs_list": [ 00:12:06.322 { 00:12:06.322 "name": "BaseBdev1", 00:12:06.322 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:06.322 "is_configured": true, 00:12:06.322 "data_offset": 0, 00:12:06.322 "data_size": 65536 00:12:06.322 }, 00:12:06.322 { 00:12:06.322 "name": "BaseBdev2", 00:12:06.322 "uuid": "2e1e0fe6-a36b-4331-a3fc-d702224211f1", 00:12:06.322 "is_configured": true, 00:12:06.322 "data_offset": 0, 00:12:06.322 "data_size": 65536 00:12:06.322 }, 00:12:06.322 { 00:12:06.322 "name": "BaseBdev3", 00:12:06.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.322 "is_configured": false, 00:12:06.322 "data_offset": 0, 00:12:06.322 "data_size": 0 00:12:06.322 } 00:12:06.322 ] 00:12:06.322 }' 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.322 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.580 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:06.580 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.580 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 [2024-10-01 13:46:16.808733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.840 [2024-10-01 13:46:16.808990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:06.840 [2024-10-01 13:46:16.809019] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.840 [2024-10-01 13:46:16.809326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:06.840 [2024-10-01 13:46:16.809529] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:06.840 [2024-10-01 13:46:16.809541] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:06.840 [2024-10-01 13:46:16.809807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.840 BaseBdev3 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 [ 00:12:06.840 { 00:12:06.840 "name": "BaseBdev3", 00:12:06.840 "aliases": [ 00:12:06.840 "ceb76b9c-09d8-4cfd-b60f-85209dea1c92" 00:12:06.840 ], 00:12:06.840 "product_name": "Malloc disk", 00:12:06.840 "block_size": 512, 00:12:06.840 "num_blocks": 65536, 00:12:06.840 "uuid": "ceb76b9c-09d8-4cfd-b60f-85209dea1c92", 00:12:06.840 "assigned_rate_limits": { 00:12:06.840 "rw_ios_per_sec": 0, 00:12:06.840 "rw_mbytes_per_sec": 0, 00:12:06.840 "r_mbytes_per_sec": 0, 00:12:06.840 "w_mbytes_per_sec": 0 00:12:06.840 }, 00:12:06.840 "claimed": true, 00:12:06.840 "claim_type": "exclusive_write", 00:12:06.840 "zoned": false, 00:12:06.840 "supported_io_types": { 00:12:06.840 "read": true, 00:12:06.840 "write": true, 00:12:06.840 "unmap": true, 00:12:06.840 "flush": true, 00:12:06.840 "reset": true, 00:12:06.840 "nvme_admin": false, 00:12:06.840 "nvme_io": false, 00:12:06.840 "nvme_io_md": false, 00:12:06.840 "write_zeroes": true, 00:12:06.840 "zcopy": true, 00:12:06.840 "get_zone_info": false, 00:12:06.840 "zone_management": false, 00:12:06.840 "zone_append": false, 00:12:06.840 "compare": false, 00:12:06.840 "compare_and_write": false, 00:12:06.840 "abort": true, 00:12:06.840 "seek_hole": false, 00:12:06.840 "seek_data": false, 00:12:06.840 
"copy": true, 00:12:06.840 "nvme_iov_md": false 00:12:06.840 }, 00:12:06.840 "memory_domains": [ 00:12:06.840 { 00:12:06.840 "dma_device_id": "system", 00:12:06.840 "dma_device_type": 1 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.840 "dma_device_type": 2 00:12:06.840 } 00:12:06.840 ], 00:12:06.840 "driver_specific": {} 00:12:06.840 } 00:12:06.840 ] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.840 13:46:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.840 "name": "Existed_Raid", 00:12:06.840 "uuid": "eb1ac6fe-7c6e-40b4-86e5-d93713b094ca", 00:12:06.840 "strip_size_kb": 0, 00:12:06.840 "state": "online", 00:12:06.840 "raid_level": "raid1", 00:12:06.840 "superblock": false, 00:12:06.840 "num_base_bdevs": 3, 00:12:06.840 "num_base_bdevs_discovered": 3, 00:12:06.840 "num_base_bdevs_operational": 3, 00:12:06.840 "base_bdevs_list": [ 00:12:06.840 { 00:12:06.840 "name": "BaseBdev1", 00:12:06.840 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:06.840 "is_configured": true, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "name": "BaseBdev2", 00:12:06.840 "uuid": "2e1e0fe6-a36b-4331-a3fc-d702224211f1", 00:12:06.840 "is_configured": true, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 }, 00:12:06.840 { 00:12:06.840 "name": "BaseBdev3", 00:12:06.840 "uuid": "ceb76b9c-09d8-4cfd-b60f-85209dea1c92", 00:12:06.840 "is_configured": true, 00:12:06.840 "data_offset": 0, 00:12:06.840 "data_size": 65536 00:12:06.840 } 00:12:06.840 ] 00:12:06.840 }' 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.840 13:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.100 13:46:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.100 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.100 [2024-10-01 13:46:17.288492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.360 "name": "Existed_Raid", 00:12:07.360 "aliases": [ 00:12:07.360 "eb1ac6fe-7c6e-40b4-86e5-d93713b094ca" 00:12:07.360 ], 00:12:07.360 "product_name": "Raid Volume", 00:12:07.360 "block_size": 512, 00:12:07.360 "num_blocks": 65536, 00:12:07.360 "uuid": "eb1ac6fe-7c6e-40b4-86e5-d93713b094ca", 00:12:07.360 "assigned_rate_limits": { 00:12:07.360 "rw_ios_per_sec": 0, 00:12:07.360 "rw_mbytes_per_sec": 0, 00:12:07.360 "r_mbytes_per_sec": 0, 00:12:07.360 "w_mbytes_per_sec": 0 00:12:07.360 }, 00:12:07.360 "claimed": false, 00:12:07.360 "zoned": false, 
00:12:07.360 "supported_io_types": { 00:12:07.360 "read": true, 00:12:07.360 "write": true, 00:12:07.360 "unmap": false, 00:12:07.360 "flush": false, 00:12:07.360 "reset": true, 00:12:07.360 "nvme_admin": false, 00:12:07.360 "nvme_io": false, 00:12:07.360 "nvme_io_md": false, 00:12:07.360 "write_zeroes": true, 00:12:07.360 "zcopy": false, 00:12:07.360 "get_zone_info": false, 00:12:07.360 "zone_management": false, 00:12:07.360 "zone_append": false, 00:12:07.360 "compare": false, 00:12:07.360 "compare_and_write": false, 00:12:07.360 "abort": false, 00:12:07.360 "seek_hole": false, 00:12:07.360 "seek_data": false, 00:12:07.360 "copy": false, 00:12:07.360 "nvme_iov_md": false 00:12:07.360 }, 00:12:07.360 "memory_domains": [ 00:12:07.360 { 00:12:07.360 "dma_device_id": "system", 00:12:07.360 "dma_device_type": 1 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.360 "dma_device_type": 2 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "dma_device_id": "system", 00:12:07.360 "dma_device_type": 1 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.360 "dma_device_type": 2 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "dma_device_id": "system", 00:12:07.360 "dma_device_type": 1 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.360 "dma_device_type": 2 00:12:07.360 } 00:12:07.360 ], 00:12:07.360 "driver_specific": { 00:12:07.360 "raid": { 00:12:07.360 "uuid": "eb1ac6fe-7c6e-40b4-86e5-d93713b094ca", 00:12:07.360 "strip_size_kb": 0, 00:12:07.360 "state": "online", 00:12:07.360 "raid_level": "raid1", 00:12:07.360 "superblock": false, 00:12:07.360 "num_base_bdevs": 3, 00:12:07.360 "num_base_bdevs_discovered": 3, 00:12:07.360 "num_base_bdevs_operational": 3, 00:12:07.360 "base_bdevs_list": [ 00:12:07.360 { 00:12:07.360 "name": "BaseBdev1", 00:12:07.360 "uuid": "02d4cdf0-be3f-4bc9-b10b-f3dabddd0a72", 00:12:07.360 "is_configured": true, 00:12:07.360 
"data_offset": 0, 00:12:07.360 "data_size": 65536 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "name": "BaseBdev2", 00:12:07.360 "uuid": "2e1e0fe6-a36b-4331-a3fc-d702224211f1", 00:12:07.360 "is_configured": true, 00:12:07.360 "data_offset": 0, 00:12:07.360 "data_size": 65536 00:12:07.360 }, 00:12:07.360 { 00:12:07.360 "name": "BaseBdev3", 00:12:07.360 "uuid": "ceb76b9c-09d8-4cfd-b60f-85209dea1c92", 00:12:07.360 "is_configured": true, 00:12:07.360 "data_offset": 0, 00:12:07.360 "data_size": 65536 00:12:07.360 } 00:12:07.360 ] 00:12:07.360 } 00:12:07.360 } 00:12:07.360 }' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:07.360 BaseBdev2 00:12:07.360 BaseBdev3' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.360 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:07.361 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.361 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.361 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.361 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.620 [2024-10-01 13:46:17.563750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.620 "name": "Existed_Raid", 00:12:07.620 "uuid": "eb1ac6fe-7c6e-40b4-86e5-d93713b094ca", 00:12:07.620 "strip_size_kb": 0, 00:12:07.620 "state": "online", 00:12:07.620 "raid_level": "raid1", 00:12:07.620 "superblock": false, 00:12:07.620 "num_base_bdevs": 3, 00:12:07.620 "num_base_bdevs_discovered": 2, 00:12:07.620 "num_base_bdevs_operational": 2, 00:12:07.620 "base_bdevs_list": [ 00:12:07.620 { 00:12:07.620 "name": null, 00:12:07.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.620 "is_configured": false, 00:12:07.620 "data_offset": 0, 00:12:07.620 "data_size": 65536 00:12:07.620 }, 00:12:07.620 { 00:12:07.620 "name": "BaseBdev2", 00:12:07.620 "uuid": "2e1e0fe6-a36b-4331-a3fc-d702224211f1", 00:12:07.620 "is_configured": true, 00:12:07.620 "data_offset": 0, 00:12:07.620 "data_size": 65536 00:12:07.620 }, 00:12:07.620 { 00:12:07.620 "name": "BaseBdev3", 00:12:07.620 "uuid": "ceb76b9c-09d8-4cfd-b60f-85209dea1c92", 00:12:07.620 "is_configured": true, 00:12:07.620 "data_offset": 0, 00:12:07.620 "data_size": 65536 00:12:07.620 } 00:12:07.620 ] 
00:12:07.620 }' 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.620 13:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 [2024-10-01 13:46:18.203507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.189 13:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.189 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.189 [2024-10-01 13:46:18.354942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.189 [2024-10-01 13:46:18.355047] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.449 [2024-10-01 13:46:18.453342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.449 [2024-10-01 13:46:18.453422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.449 [2024-10-01 13:46:18.453438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.449 13:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 BaseBdev2 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:08.449 
13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:08.449 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.450 [ 00:12:08.450 { 00:12:08.450 "name": "BaseBdev2", 00:12:08.450 "aliases": [ 00:12:08.450 "ace3986d-ae9a-4456-b491-8297f5e3a159" 00:12:08.450 ], 00:12:08.450 "product_name": "Malloc disk", 00:12:08.450 "block_size": 512, 00:12:08.450 "num_blocks": 65536, 00:12:08.450 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:08.450 "assigned_rate_limits": { 00:12:08.450 "rw_ios_per_sec": 0, 00:12:08.450 "rw_mbytes_per_sec": 0, 00:12:08.450 "r_mbytes_per_sec": 0, 00:12:08.450 "w_mbytes_per_sec": 0 00:12:08.450 }, 00:12:08.450 "claimed": false, 00:12:08.450 "zoned": false, 00:12:08.450 "supported_io_types": { 00:12:08.450 "read": true, 00:12:08.450 "write": true, 00:12:08.450 "unmap": true, 00:12:08.450 "flush": true, 00:12:08.450 "reset": true, 00:12:08.450 "nvme_admin": false, 00:12:08.450 "nvme_io": false, 00:12:08.450 "nvme_io_md": false, 00:12:08.450 "write_zeroes": true, 
00:12:08.450 "zcopy": true, 00:12:08.450 "get_zone_info": false, 00:12:08.450 "zone_management": false, 00:12:08.450 "zone_append": false, 00:12:08.450 "compare": false, 00:12:08.450 "compare_and_write": false, 00:12:08.450 "abort": true, 00:12:08.450 "seek_hole": false, 00:12:08.450 "seek_data": false, 00:12:08.450 "copy": true, 00:12:08.450 "nvme_iov_md": false 00:12:08.450 }, 00:12:08.450 "memory_domains": [ 00:12:08.450 { 00:12:08.450 "dma_device_id": "system", 00:12:08.450 "dma_device_type": 1 00:12:08.450 }, 00:12:08.450 { 00:12:08.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.450 "dma_device_type": 2 00:12:08.450 } 00:12:08.450 ], 00:12:08.450 "driver_specific": {} 00:12:08.450 } 00:12:08.450 ] 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.450 BaseBdev3 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:08.450 13:46:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.450 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.726 [ 00:12:08.726 { 00:12:08.726 "name": "BaseBdev3", 00:12:08.726 "aliases": [ 00:12:08.726 "e192a52c-e27c-4810-81d1-5bdf34acbf73" 00:12:08.726 ], 00:12:08.726 "product_name": "Malloc disk", 00:12:08.726 "block_size": 512, 00:12:08.726 "num_blocks": 65536, 00:12:08.726 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:08.726 "assigned_rate_limits": { 00:12:08.726 "rw_ios_per_sec": 0, 00:12:08.726 "rw_mbytes_per_sec": 0, 00:12:08.726 "r_mbytes_per_sec": 0, 00:12:08.726 "w_mbytes_per_sec": 0 00:12:08.726 }, 00:12:08.726 "claimed": false, 00:12:08.726 "zoned": false, 00:12:08.726 "supported_io_types": { 00:12:08.726 "read": true, 00:12:08.726 "write": true, 00:12:08.726 "unmap": true, 00:12:08.726 "flush": true, 00:12:08.726 "reset": true, 00:12:08.726 "nvme_admin": false, 00:12:08.726 "nvme_io": false, 00:12:08.726 "nvme_io_md": false, 00:12:08.726 "write_zeroes": true, 
00:12:08.726 "zcopy": true, 00:12:08.726 "get_zone_info": false, 00:12:08.726 "zone_management": false, 00:12:08.726 "zone_append": false, 00:12:08.726 "compare": false, 00:12:08.726 "compare_and_write": false, 00:12:08.726 "abort": true, 00:12:08.726 "seek_hole": false, 00:12:08.726 "seek_data": false, 00:12:08.726 "copy": true, 00:12:08.726 "nvme_iov_md": false 00:12:08.726 }, 00:12:08.726 "memory_domains": [ 00:12:08.726 { 00:12:08.726 "dma_device_id": "system", 00:12:08.726 "dma_device_type": 1 00:12:08.726 }, 00:12:08.726 { 00:12:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.726 "dma_device_type": 2 00:12:08.726 } 00:12:08.726 ], 00:12:08.726 "driver_specific": {} 00:12:08.726 } 00:12:08.726 ] 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.726 [2024-10-01 13:46:18.686124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.726 [2024-10-01 13:46:18.686176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.726 [2024-10-01 13:46:18.686202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.726 [2024-10-01 13:46:18.688459] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.726 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:08.727 "name": "Existed_Raid", 00:12:08.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.727 "strip_size_kb": 0, 00:12:08.727 "state": "configuring", 00:12:08.727 "raid_level": "raid1", 00:12:08.727 "superblock": false, 00:12:08.727 "num_base_bdevs": 3, 00:12:08.727 "num_base_bdevs_discovered": 2, 00:12:08.727 "num_base_bdevs_operational": 3, 00:12:08.727 "base_bdevs_list": [ 00:12:08.727 { 00:12:08.727 "name": "BaseBdev1", 00:12:08.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.727 "is_configured": false, 00:12:08.727 "data_offset": 0, 00:12:08.727 "data_size": 0 00:12:08.727 }, 00:12:08.727 { 00:12:08.727 "name": "BaseBdev2", 00:12:08.727 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:08.727 "is_configured": true, 00:12:08.727 "data_offset": 0, 00:12:08.727 "data_size": 65536 00:12:08.727 }, 00:12:08.727 { 00:12:08.727 "name": "BaseBdev3", 00:12:08.727 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:08.727 "is_configured": true, 00:12:08.727 "data_offset": 0, 00:12:08.727 "data_size": 65536 00:12:08.727 } 00:12:08.727 ] 00:12:08.727 }' 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.727 13:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.986 [2024-10-01 13:46:19.125500] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.986 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.245 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.245 "name": "Existed_Raid", 00:12:09.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.245 "strip_size_kb": 0, 00:12:09.245 "state": "configuring", 00:12:09.245 "raid_level": "raid1", 00:12:09.245 "superblock": false, 00:12:09.245 "num_base_bdevs": 3, 
00:12:09.245 "num_base_bdevs_discovered": 1, 00:12:09.245 "num_base_bdevs_operational": 3, 00:12:09.245 "base_bdevs_list": [ 00:12:09.245 { 00:12:09.245 "name": "BaseBdev1", 00:12:09.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.245 "is_configured": false, 00:12:09.245 "data_offset": 0, 00:12:09.245 "data_size": 0 00:12:09.245 }, 00:12:09.245 { 00:12:09.245 "name": null, 00:12:09.245 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:09.245 "is_configured": false, 00:12:09.245 "data_offset": 0, 00:12:09.245 "data_size": 65536 00:12:09.245 }, 00:12:09.245 { 00:12:09.245 "name": "BaseBdev3", 00:12:09.245 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:09.245 "is_configured": true, 00:12:09.245 "data_offset": 0, 00:12:09.245 "data_size": 65536 00:12:09.245 } 00:12:09.245 ] 00:12:09.245 }' 00:12:09.245 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.245 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.504 13:46:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 [2024-10-01 13:46:19.631552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.504 BaseBdev1 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.504 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 [ 00:12:09.504 { 00:12:09.504 "name": "BaseBdev1", 00:12:09.504 "aliases": [ 00:12:09.504 "b8d2fe7f-a077-482f-a764-4bf46e24ae29" 00:12:09.504 ], 00:12:09.504 "product_name": "Malloc disk", 
00:12:09.504 "block_size": 512, 00:12:09.504 "num_blocks": 65536, 00:12:09.504 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:09.504 "assigned_rate_limits": { 00:12:09.504 "rw_ios_per_sec": 0, 00:12:09.504 "rw_mbytes_per_sec": 0, 00:12:09.505 "r_mbytes_per_sec": 0, 00:12:09.505 "w_mbytes_per_sec": 0 00:12:09.505 }, 00:12:09.505 "claimed": true, 00:12:09.505 "claim_type": "exclusive_write", 00:12:09.505 "zoned": false, 00:12:09.505 "supported_io_types": { 00:12:09.505 "read": true, 00:12:09.505 "write": true, 00:12:09.505 "unmap": true, 00:12:09.505 "flush": true, 00:12:09.505 "reset": true, 00:12:09.505 "nvme_admin": false, 00:12:09.505 "nvme_io": false, 00:12:09.505 "nvme_io_md": false, 00:12:09.505 "write_zeroes": true, 00:12:09.505 "zcopy": true, 00:12:09.505 "get_zone_info": false, 00:12:09.505 "zone_management": false, 00:12:09.505 "zone_append": false, 00:12:09.505 "compare": false, 00:12:09.505 "compare_and_write": false, 00:12:09.505 "abort": true, 00:12:09.505 "seek_hole": false, 00:12:09.505 "seek_data": false, 00:12:09.505 "copy": true, 00:12:09.505 "nvme_iov_md": false 00:12:09.505 }, 00:12:09.505 "memory_domains": [ 00:12:09.505 { 00:12:09.505 "dma_device_id": "system", 00:12:09.505 "dma_device_type": 1 00:12:09.505 }, 00:12:09.505 { 00:12:09.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.505 "dma_device_type": 2 00:12:09.505 } 00:12:09.505 ], 00:12:09.505 "driver_specific": {} 00:12:09.505 } 00:12:09.505 ] 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.505 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.765 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.765 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.765 "name": "Existed_Raid", 00:12:09.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.765 "strip_size_kb": 0, 00:12:09.765 "state": "configuring", 00:12:09.765 "raid_level": "raid1", 00:12:09.765 "superblock": false, 00:12:09.765 "num_base_bdevs": 3, 00:12:09.765 "num_base_bdevs_discovered": 2, 00:12:09.765 "num_base_bdevs_operational": 3, 00:12:09.765 "base_bdevs_list": [ 00:12:09.765 { 00:12:09.765 "name": "BaseBdev1", 00:12:09.765 "uuid": 
"b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:09.765 "is_configured": true, 00:12:09.765 "data_offset": 0, 00:12:09.765 "data_size": 65536 00:12:09.765 }, 00:12:09.765 { 00:12:09.765 "name": null, 00:12:09.765 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:09.765 "is_configured": false, 00:12:09.765 "data_offset": 0, 00:12:09.765 "data_size": 65536 00:12:09.765 }, 00:12:09.765 { 00:12:09.765 "name": "BaseBdev3", 00:12:09.765 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:09.765 "is_configured": true, 00:12:09.765 "data_offset": 0, 00:12:09.765 "data_size": 65536 00:12:09.765 } 00:12:09.765 ] 00:12:09.765 }' 00:12:09.765 13:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.765 13:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.024 [2024-10-01 13:46:20.155497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:10.024 13:46:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.024 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.025 "name": "Existed_Raid", 00:12:10.025 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:10.025 "strip_size_kb": 0, 00:12:10.025 "state": "configuring", 00:12:10.025 "raid_level": "raid1", 00:12:10.025 "superblock": false, 00:12:10.025 "num_base_bdevs": 3, 00:12:10.025 "num_base_bdevs_discovered": 1, 00:12:10.025 "num_base_bdevs_operational": 3, 00:12:10.025 "base_bdevs_list": [ 00:12:10.025 { 00:12:10.025 "name": "BaseBdev1", 00:12:10.025 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:10.025 "is_configured": true, 00:12:10.025 "data_offset": 0, 00:12:10.025 "data_size": 65536 00:12:10.025 }, 00:12:10.025 { 00:12:10.025 "name": null, 00:12:10.025 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:10.025 "is_configured": false, 00:12:10.025 "data_offset": 0, 00:12:10.025 "data_size": 65536 00:12:10.025 }, 00:12:10.025 { 00:12:10.025 "name": null, 00:12:10.025 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:10.025 "is_configured": false, 00:12:10.025 "data_offset": 0, 00:12:10.025 "data_size": 65536 00:12:10.025 } 00:12:10.025 ] 00:12:10.025 }' 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.025 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.592 [2024-10-01 13:46:20.627482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.592 "name": "Existed_Raid", 00:12:10.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.592 "strip_size_kb": 0, 00:12:10.592 "state": "configuring", 00:12:10.592 "raid_level": "raid1", 00:12:10.592 "superblock": false, 00:12:10.592 "num_base_bdevs": 3, 00:12:10.592 "num_base_bdevs_discovered": 2, 00:12:10.592 "num_base_bdevs_operational": 3, 00:12:10.592 "base_bdevs_list": [ 00:12:10.592 { 00:12:10.592 "name": "BaseBdev1", 00:12:10.592 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:10.592 "is_configured": true, 00:12:10.592 "data_offset": 0, 00:12:10.592 "data_size": 65536 00:12:10.592 }, 00:12:10.592 { 00:12:10.592 "name": null, 00:12:10.592 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:10.592 "is_configured": false, 00:12:10.592 "data_offset": 0, 00:12:10.592 "data_size": 65536 00:12:10.592 }, 00:12:10.592 { 00:12:10.592 "name": "BaseBdev3", 00:12:10.592 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:10.592 "is_configured": true, 00:12:10.592 "data_offset": 0, 00:12:10.592 "data_size": 65536 00:12:10.592 } 00:12:10.592 ] 00:12:10.592 }' 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.592 13:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.851 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.851 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.851 13:46:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.851 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.110 [2024-10-01 13:46:21.087562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.110 13:46:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.110 "name": "Existed_Raid", 00:12:11.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.110 "strip_size_kb": 0, 00:12:11.110 "state": "configuring", 00:12:11.110 "raid_level": "raid1", 00:12:11.110 "superblock": false, 00:12:11.110 "num_base_bdevs": 3, 00:12:11.110 "num_base_bdevs_discovered": 1, 00:12:11.110 "num_base_bdevs_operational": 3, 00:12:11.110 "base_bdevs_list": [ 00:12:11.110 { 00:12:11.110 "name": null, 00:12:11.110 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:11.110 "is_configured": false, 00:12:11.110 "data_offset": 0, 00:12:11.110 "data_size": 65536 00:12:11.110 }, 00:12:11.110 { 00:12:11.110 "name": null, 00:12:11.110 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:11.110 "is_configured": false, 00:12:11.110 "data_offset": 0, 00:12:11.110 "data_size": 65536 00:12:11.110 }, 00:12:11.110 { 00:12:11.110 "name": "BaseBdev3", 00:12:11.110 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:11.110 "is_configured": true, 00:12:11.110 "data_offset": 0, 00:12:11.110 "data_size": 65536 00:12:11.110 } 00:12:11.110 ] 00:12:11.110 }' 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.110 13:46:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.679 [2024-10-01 13:46:21.668514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.679 "name": "Existed_Raid", 00:12:11.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.679 "strip_size_kb": 0, 00:12:11.679 "state": "configuring", 00:12:11.679 "raid_level": "raid1", 00:12:11.679 "superblock": false, 00:12:11.679 "num_base_bdevs": 3, 00:12:11.679 "num_base_bdevs_discovered": 2, 00:12:11.679 "num_base_bdevs_operational": 3, 00:12:11.679 "base_bdevs_list": [ 00:12:11.679 { 00:12:11.679 "name": null, 00:12:11.679 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:11.679 "is_configured": false, 00:12:11.679 "data_offset": 0, 00:12:11.679 "data_size": 65536 00:12:11.679 }, 00:12:11.679 { 00:12:11.679 "name": "BaseBdev2", 00:12:11.679 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:11.679 "is_configured": true, 00:12:11.679 "data_offset": 0, 00:12:11.679 "data_size": 65536 00:12:11.679 }, 00:12:11.679 { 
00:12:11.679 "name": "BaseBdev3", 00:12:11.679 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:11.679 "is_configured": true, 00:12:11.679 "data_offset": 0, 00:12:11.679 "data_size": 65536 00:12:11.679 } 00:12:11.679 ] 00:12:11.679 }' 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.679 13:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.939 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8d2fe7f-a077-482f-a764-4bf46e24ae29 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.198 13:46:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.198 [2024-10-01 13:46:22.194198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:12.198 [2024-10-01 13:46:22.194256] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:12.198 [2024-10-01 13:46:22.194266] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:12.198 [2024-10-01 13:46:22.194549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:12.198 [2024-10-01 13:46:22.194700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:12.198 [2024-10-01 13:46:22.194715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:12.198 [2024-10-01 13:46:22.194961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.198 NewBaseBdev 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:12.198 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.199 [ 00:12:12.199 { 00:12:12.199 "name": "NewBaseBdev", 00:12:12.199 "aliases": [ 00:12:12.199 "b8d2fe7f-a077-482f-a764-4bf46e24ae29" 00:12:12.199 ], 00:12:12.199 "product_name": "Malloc disk", 00:12:12.199 "block_size": 512, 00:12:12.199 "num_blocks": 65536, 00:12:12.199 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:12.199 "assigned_rate_limits": { 00:12:12.199 "rw_ios_per_sec": 0, 00:12:12.199 "rw_mbytes_per_sec": 0, 00:12:12.199 "r_mbytes_per_sec": 0, 00:12:12.199 "w_mbytes_per_sec": 0 00:12:12.199 }, 00:12:12.199 "claimed": true, 00:12:12.199 "claim_type": "exclusive_write", 00:12:12.199 "zoned": false, 00:12:12.199 "supported_io_types": { 00:12:12.199 "read": true, 00:12:12.199 "write": true, 00:12:12.199 "unmap": true, 00:12:12.199 "flush": true, 00:12:12.199 "reset": true, 00:12:12.199 "nvme_admin": false, 00:12:12.199 "nvme_io": false, 00:12:12.199 "nvme_io_md": false, 00:12:12.199 "write_zeroes": true, 00:12:12.199 "zcopy": true, 00:12:12.199 "get_zone_info": false, 00:12:12.199 "zone_management": false, 00:12:12.199 "zone_append": false, 00:12:12.199 "compare": false, 00:12:12.199 "compare_and_write": false, 00:12:12.199 "abort": true, 00:12:12.199 "seek_hole": false, 00:12:12.199 "seek_data": false, 00:12:12.199 "copy": true, 00:12:12.199 "nvme_iov_md": false 00:12:12.199 }, 00:12:12.199 "memory_domains": [ 00:12:12.199 { 00:12:12.199 
"dma_device_id": "system", 00:12:12.199 "dma_device_type": 1 00:12:12.199 }, 00:12:12.199 { 00:12:12.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.199 "dma_device_type": 2 00:12:12.199 } 00:12:12.199 ], 00:12:12.199 "driver_specific": {} 00:12:12.199 } 00:12:12.199 ] 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.199 13:46:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.199 "name": "Existed_Raid", 00:12:12.199 "uuid": "73d5798c-49ca-463e-8100-b06227393776", 00:12:12.199 "strip_size_kb": 0, 00:12:12.199 "state": "online", 00:12:12.199 "raid_level": "raid1", 00:12:12.199 "superblock": false, 00:12:12.199 "num_base_bdevs": 3, 00:12:12.199 "num_base_bdevs_discovered": 3, 00:12:12.199 "num_base_bdevs_operational": 3, 00:12:12.199 "base_bdevs_list": [ 00:12:12.199 { 00:12:12.199 "name": "NewBaseBdev", 00:12:12.199 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:12.199 "is_configured": true, 00:12:12.199 "data_offset": 0, 00:12:12.199 "data_size": 65536 00:12:12.199 }, 00:12:12.199 { 00:12:12.199 "name": "BaseBdev2", 00:12:12.199 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:12.199 "is_configured": true, 00:12:12.199 "data_offset": 0, 00:12:12.199 "data_size": 65536 00:12:12.199 }, 00:12:12.199 { 00:12:12.199 "name": "BaseBdev3", 00:12:12.199 "uuid": "e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:12.199 "is_configured": true, 00:12:12.199 "data_offset": 0, 00:12:12.199 "data_size": 65536 00:12:12.199 } 00:12:12.199 ] 00:12:12.199 }' 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.199 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.467 
13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.467 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.467 [2024-10-01 13:46:22.657975] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.727 "name": "Existed_Raid", 00:12:12.727 "aliases": [ 00:12:12.727 "73d5798c-49ca-463e-8100-b06227393776" 00:12:12.727 ], 00:12:12.727 "product_name": "Raid Volume", 00:12:12.727 "block_size": 512, 00:12:12.727 "num_blocks": 65536, 00:12:12.727 "uuid": "73d5798c-49ca-463e-8100-b06227393776", 00:12:12.727 "assigned_rate_limits": { 00:12:12.727 "rw_ios_per_sec": 0, 00:12:12.727 "rw_mbytes_per_sec": 0, 00:12:12.727 "r_mbytes_per_sec": 0, 00:12:12.727 "w_mbytes_per_sec": 0 00:12:12.727 }, 00:12:12.727 "claimed": false, 00:12:12.727 "zoned": false, 00:12:12.727 "supported_io_types": { 00:12:12.727 "read": true, 00:12:12.727 "write": true, 00:12:12.727 "unmap": false, 00:12:12.727 "flush": false, 00:12:12.727 "reset": true, 00:12:12.727 "nvme_admin": false, 00:12:12.727 "nvme_io": false, 00:12:12.727 "nvme_io_md": false, 00:12:12.727 "write_zeroes": true, 00:12:12.727 "zcopy": false, 00:12:12.727 
"get_zone_info": false, 00:12:12.727 "zone_management": false, 00:12:12.727 "zone_append": false, 00:12:12.727 "compare": false, 00:12:12.727 "compare_and_write": false, 00:12:12.727 "abort": false, 00:12:12.727 "seek_hole": false, 00:12:12.727 "seek_data": false, 00:12:12.727 "copy": false, 00:12:12.727 "nvme_iov_md": false 00:12:12.727 }, 00:12:12.727 "memory_domains": [ 00:12:12.727 { 00:12:12.727 "dma_device_id": "system", 00:12:12.727 "dma_device_type": 1 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.727 "dma_device_type": 2 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "dma_device_id": "system", 00:12:12.727 "dma_device_type": 1 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.727 "dma_device_type": 2 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "dma_device_id": "system", 00:12:12.727 "dma_device_type": 1 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.727 "dma_device_type": 2 00:12:12.727 } 00:12:12.727 ], 00:12:12.727 "driver_specific": { 00:12:12.727 "raid": { 00:12:12.727 "uuid": "73d5798c-49ca-463e-8100-b06227393776", 00:12:12.727 "strip_size_kb": 0, 00:12:12.727 "state": "online", 00:12:12.727 "raid_level": "raid1", 00:12:12.727 "superblock": false, 00:12:12.727 "num_base_bdevs": 3, 00:12:12.727 "num_base_bdevs_discovered": 3, 00:12:12.727 "num_base_bdevs_operational": 3, 00:12:12.727 "base_bdevs_list": [ 00:12:12.727 { 00:12:12.727 "name": "NewBaseBdev", 00:12:12.727 "uuid": "b8d2fe7f-a077-482f-a764-4bf46e24ae29", 00:12:12.727 "is_configured": true, 00:12:12.727 "data_offset": 0, 00:12:12.727 "data_size": 65536 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "name": "BaseBdev2", 00:12:12.727 "uuid": "ace3986d-ae9a-4456-b491-8297f5e3a159", 00:12:12.727 "is_configured": true, 00:12:12.727 "data_offset": 0, 00:12:12.727 "data_size": 65536 00:12:12.727 }, 00:12:12.727 { 00:12:12.727 "name": "BaseBdev3", 00:12:12.727 "uuid": 
"e192a52c-e27c-4810-81d1-5bdf34acbf73", 00:12:12.727 "is_configured": true, 00:12:12.727 "data_offset": 0, 00:12:12.727 "data_size": 65536 00:12:12.727 } 00:12:12.727 ] 00:12:12.727 } 00:12:12.727 } 00:12:12.727 }' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.727 BaseBdev2 00:12:12.727 BaseBdev3' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.727 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 
[2024-10-01 13:46:22.909365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.728 [2024-10-01 13:46:22.909413] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.728 [2024-10-01 13:46:22.909489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.728 [2024-10-01 13:46:22.909769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.728 [2024-10-01 13:46:22.909789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67298 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67298 ']' 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67298 00:12:12.728 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67298 00:12:12.987 killing process with pid 67298 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67298' 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67298 00:12:12.987 [2024-10-01 
13:46:22.959713] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.987 13:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67298 00:12:13.246 [2024-10-01 13:46:23.264618] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.623 13:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.624 00:12:14.624 real 0m10.618s 00:12:14.624 user 0m16.718s 00:12:14.624 sys 0m2.057s 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.624 ************************************ 00:12:14.624 END TEST raid_state_function_test 00:12:14.624 ************************************ 00:12:14.624 13:46:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:14.624 13:46:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:14.624 13:46:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.624 13:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.624 ************************************ 00:12:14.624 START TEST raid_state_function_test_sb 00:12:14.624 ************************************ 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.624 13:46:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:14.624 
13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67918 00:12:14.624 Process raid pid: 67918 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67918' 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67918 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67918 ']' 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.624 13:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.624 [2024-10-01 13:46:24.710203] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:14.624 [2024-10-01 13:46:24.710334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.883 [2024-10-01 13:46:24.874819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.142 [2024-10-01 13:46:25.092381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.142 [2024-10-01 13:46:25.315639] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.142 [2024-10-01 13:46:25.315690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.402 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.402 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:15.402 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:15.402 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.402 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.402 [2024-10-01 13:46:25.545274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.402 [2024-10-01 13:46:25.545334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.403 [2024-10-01 13:46:25.545348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.403 [2024-10-01 13:46:25.545361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.403 [2024-10-01 13:46:25.545369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:15.403 [2024-10-01 13:46:25.545382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.403 13:46:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.662 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.662 "name": "Existed_Raid", 00:12:15.662 "uuid": "dce8a542-0e20-4ca3-ba53-4d9bd9d9f24c", 00:12:15.662 "strip_size_kb": 0, 00:12:15.662 "state": "configuring", 00:12:15.662 "raid_level": "raid1", 00:12:15.662 "superblock": true, 00:12:15.662 "num_base_bdevs": 3, 00:12:15.662 "num_base_bdevs_discovered": 0, 00:12:15.662 "num_base_bdevs_operational": 3, 00:12:15.662 "base_bdevs_list": [ 00:12:15.662 { 00:12:15.662 "name": "BaseBdev1", 00:12:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.662 "is_configured": false, 00:12:15.662 "data_offset": 0, 00:12:15.662 "data_size": 0 00:12:15.662 }, 00:12:15.662 { 00:12:15.662 "name": "BaseBdev2", 00:12:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.662 "is_configured": false, 00:12:15.662 "data_offset": 0, 00:12:15.662 "data_size": 0 00:12:15.662 }, 00:12:15.662 { 00:12:15.662 "name": "BaseBdev3", 00:12:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.662 "is_configured": false, 00:12:15.662 "data_offset": 0, 00:12:15.662 "data_size": 0 00:12:15.662 } 00:12:15.662 ] 00:12:15.662 }' 00:12:15.662 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.663 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 [2024-10-01 13:46:25.952632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.921 [2024-10-01 13:46:25.952678] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 [2024-10-01 13:46:25.960665] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.921 [2024-10-01 13:46:25.960720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.921 [2024-10-01 13:46:25.960731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.921 [2024-10-01 13:46:25.960745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.921 [2024-10-01 13:46:25.960752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.921 [2024-10-01 13:46:25.960764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 [2024-10-01 13:46:26.021799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.921 BaseBdev1 
00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 [ 00:12:15.921 { 00:12:15.921 "name": "BaseBdev1", 00:12:15.921 "aliases": [ 00:12:15.921 "16791224-f035-4f64-9e2c-03f0c033a0d1" 00:12:15.921 ], 00:12:15.921 "product_name": "Malloc disk", 00:12:15.921 "block_size": 512, 00:12:15.921 "num_blocks": 65536, 00:12:15.921 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:15.921 "assigned_rate_limits": { 00:12:15.921 
"rw_ios_per_sec": 0, 00:12:15.921 "rw_mbytes_per_sec": 0, 00:12:15.921 "r_mbytes_per_sec": 0, 00:12:15.921 "w_mbytes_per_sec": 0 00:12:15.921 }, 00:12:15.921 "claimed": true, 00:12:15.921 "claim_type": "exclusive_write", 00:12:15.921 "zoned": false, 00:12:15.921 "supported_io_types": { 00:12:15.921 "read": true, 00:12:15.921 "write": true, 00:12:15.921 "unmap": true, 00:12:15.921 "flush": true, 00:12:15.921 "reset": true, 00:12:15.921 "nvme_admin": false, 00:12:15.921 "nvme_io": false, 00:12:15.921 "nvme_io_md": false, 00:12:15.921 "write_zeroes": true, 00:12:15.921 "zcopy": true, 00:12:15.921 "get_zone_info": false, 00:12:15.921 "zone_management": false, 00:12:15.921 "zone_append": false, 00:12:15.921 "compare": false, 00:12:15.921 "compare_and_write": false, 00:12:15.921 "abort": true, 00:12:15.921 "seek_hole": false, 00:12:15.921 "seek_data": false, 00:12:15.921 "copy": true, 00:12:15.921 "nvme_iov_md": false 00:12:15.921 }, 00:12:15.921 "memory_domains": [ 00:12:15.921 { 00:12:15.921 "dma_device_id": "system", 00:12:15.921 "dma_device_type": 1 00:12:15.921 }, 00:12:15.921 { 00:12:15.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.921 "dma_device_type": 2 00:12:15.921 } 00:12:15.921 ], 00:12:15.921 "driver_specific": {} 00:12:15.921 } 00:12:15.921 ] 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.921 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.922 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.922 "name": "Existed_Raid", 00:12:15.922 "uuid": "9f71e883-76cc-421d-a6d9-bc3fa946cc8f", 00:12:15.922 "strip_size_kb": 0, 00:12:15.922 "state": "configuring", 00:12:15.922 "raid_level": "raid1", 00:12:15.922 "superblock": true, 00:12:15.922 "num_base_bdevs": 3, 00:12:15.922 "num_base_bdevs_discovered": 1, 00:12:15.922 "num_base_bdevs_operational": 3, 00:12:15.922 "base_bdevs_list": [ 00:12:15.922 { 00:12:15.922 "name": "BaseBdev1", 00:12:15.922 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:15.922 "is_configured": true, 00:12:15.922 "data_offset": 2048, 00:12:15.922 "data_size": 63488 
00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "name": "BaseBdev2", 00:12:15.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.922 "is_configured": false, 00:12:15.922 "data_offset": 0, 00:12:15.922 "data_size": 0 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "name": "BaseBdev3", 00:12:15.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.922 "is_configured": false, 00:12:15.922 "data_offset": 0, 00:12:15.922 "data_size": 0 00:12:15.922 } 00:12:15.922 ] 00:12:15.922 }' 00:12:15.922 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.922 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 [2024-10-01 13:46:26.477235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.490 [2024-10-01 13:46:26.477299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 [2024-10-01 13:46:26.485287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.490 [2024-10-01 13:46:26.487570] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.490 [2024-10-01 13:46:26.487621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.490 [2024-10-01 13:46:26.487632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.490 [2024-10-01 13:46:26.487645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.490 "name": "Existed_Raid", 00:12:16.490 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:16.490 "strip_size_kb": 0, 00:12:16.490 "state": "configuring", 00:12:16.490 "raid_level": "raid1", 00:12:16.490 "superblock": true, 00:12:16.490 "num_base_bdevs": 3, 00:12:16.490 "num_base_bdevs_discovered": 1, 00:12:16.490 "num_base_bdevs_operational": 3, 00:12:16.490 "base_bdevs_list": [ 00:12:16.490 { 00:12:16.490 "name": "BaseBdev1", 00:12:16.490 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:16.490 "is_configured": true, 00:12:16.490 "data_offset": 2048, 00:12:16.490 "data_size": 63488 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "name": "BaseBdev2", 00:12:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.490 "is_configured": false, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 0 00:12:16.490 }, 00:12:16.490 { 00:12:16.490 "name": "BaseBdev3", 00:12:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.490 "is_configured": false, 00:12:16.490 "data_offset": 0, 00:12:16.490 "data_size": 0 00:12:16.490 } 00:12:16.490 ] 00:12:16.490 }' 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.490 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.750 [2024-10-01 13:46:26.918185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.750 BaseBdev2 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:16.750 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.030 [ 00:12:17.030 { 00:12:17.030 "name": "BaseBdev2", 00:12:17.030 "aliases": [ 00:12:17.030 "9fdaa991-8fd2-47a1-b108-fcafaff6f546" 00:12:17.030 ], 00:12:17.030 "product_name": "Malloc disk", 00:12:17.030 "block_size": 512, 00:12:17.030 "num_blocks": 65536, 00:12:17.030 "uuid": "9fdaa991-8fd2-47a1-b108-fcafaff6f546", 00:12:17.030 "assigned_rate_limits": { 00:12:17.030 "rw_ios_per_sec": 0, 00:12:17.030 "rw_mbytes_per_sec": 0, 00:12:17.030 "r_mbytes_per_sec": 0, 00:12:17.030 "w_mbytes_per_sec": 0 00:12:17.030 }, 00:12:17.030 "claimed": true, 00:12:17.030 "claim_type": "exclusive_write", 00:12:17.030 "zoned": false, 00:12:17.030 "supported_io_types": { 00:12:17.030 "read": true, 00:12:17.030 "write": true, 00:12:17.030 "unmap": true, 00:12:17.030 "flush": true, 00:12:17.030 "reset": true, 00:12:17.030 "nvme_admin": false, 00:12:17.030 "nvme_io": false, 00:12:17.030 "nvme_io_md": false, 00:12:17.030 "write_zeroes": true, 00:12:17.030 "zcopy": true, 00:12:17.030 "get_zone_info": false, 00:12:17.030 "zone_management": false, 00:12:17.030 "zone_append": false, 00:12:17.030 "compare": false, 00:12:17.030 "compare_and_write": false, 00:12:17.030 "abort": true, 00:12:17.030 "seek_hole": false, 00:12:17.030 "seek_data": false, 00:12:17.030 "copy": true, 00:12:17.030 "nvme_iov_md": false 00:12:17.030 }, 00:12:17.030 "memory_domains": [ 00:12:17.030 { 00:12:17.030 "dma_device_id": "system", 00:12:17.030 "dma_device_type": 1 00:12:17.030 }, 00:12:17.030 { 00:12:17.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.030 "dma_device_type": 2 00:12:17.030 } 00:12:17.030 ], 00:12:17.030 "driver_specific": {} 00:12:17.030 } 00:12:17.030 ] 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.030 13:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.030 
13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.030 "name": "Existed_Raid", 00:12:17.030 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:17.030 "strip_size_kb": 0, 00:12:17.030 "state": "configuring", 00:12:17.030 "raid_level": "raid1", 00:12:17.030 "superblock": true, 00:12:17.030 "num_base_bdevs": 3, 00:12:17.030 "num_base_bdevs_discovered": 2, 00:12:17.030 "num_base_bdevs_operational": 3, 00:12:17.030 "base_bdevs_list": [ 00:12:17.030 { 00:12:17.030 "name": "BaseBdev1", 00:12:17.030 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:17.030 "is_configured": true, 00:12:17.030 "data_offset": 2048, 00:12:17.030 "data_size": 63488 00:12:17.030 }, 00:12:17.030 { 00:12:17.030 "name": "BaseBdev2", 00:12:17.030 "uuid": "9fdaa991-8fd2-47a1-b108-fcafaff6f546", 00:12:17.030 "is_configured": true, 00:12:17.030 "data_offset": 2048, 00:12:17.030 "data_size": 63488 00:12:17.030 }, 00:12:17.030 { 00:12:17.030 "name": "BaseBdev3", 00:12:17.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.030 "is_configured": false, 00:12:17.030 "data_offset": 0, 00:12:17.030 "data_size": 0 00:12:17.030 } 00:12:17.030 ] 00:12:17.030 }' 00:12:17.030 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.031 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 [2024-10-01 13:46:27.438199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.300 [2024-10-01 13:46:27.438489] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:12:17.300 [2024-10-01 13:46:27.438514] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.300 [2024-10-01 13:46:27.438802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:17.300 [2024-10-01 13:46:27.438959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.300 [2024-10-01 13:46:27.438980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:17.300 BaseBdev3 00:12:17.300 [2024-10-01 13:46:27.439131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.300 13:46:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 [ 00:12:17.300 { 00:12:17.300 "name": "BaseBdev3", 00:12:17.300 "aliases": [ 00:12:17.300 "5516524e-7f39-46e4-8228-c77a778292c3" 00:12:17.300 ], 00:12:17.300 "product_name": "Malloc disk", 00:12:17.300 "block_size": 512, 00:12:17.300 "num_blocks": 65536, 00:12:17.300 "uuid": "5516524e-7f39-46e4-8228-c77a778292c3", 00:12:17.300 "assigned_rate_limits": { 00:12:17.300 "rw_ios_per_sec": 0, 00:12:17.300 "rw_mbytes_per_sec": 0, 00:12:17.300 "r_mbytes_per_sec": 0, 00:12:17.300 "w_mbytes_per_sec": 0 00:12:17.300 }, 00:12:17.300 "claimed": true, 00:12:17.300 "claim_type": "exclusive_write", 00:12:17.300 "zoned": false, 00:12:17.300 "supported_io_types": { 00:12:17.300 "read": true, 00:12:17.300 "write": true, 00:12:17.300 "unmap": true, 00:12:17.300 "flush": true, 00:12:17.300 "reset": true, 00:12:17.300 "nvme_admin": false, 00:12:17.300 "nvme_io": false, 00:12:17.300 "nvme_io_md": false, 00:12:17.300 "write_zeroes": true, 00:12:17.300 "zcopy": true, 00:12:17.300 "get_zone_info": false, 00:12:17.300 "zone_management": false, 00:12:17.300 "zone_append": false, 00:12:17.300 "compare": false, 00:12:17.300 "compare_and_write": false, 00:12:17.300 "abort": true, 00:12:17.300 "seek_hole": false, 00:12:17.300 "seek_data": false, 00:12:17.300 "copy": true, 00:12:17.300 "nvme_iov_md": false 00:12:17.300 }, 00:12:17.300 "memory_domains": [ 00:12:17.300 { 00:12:17.300 "dma_device_id": "system", 00:12:17.300 "dma_device_type": 1 00:12:17.300 }, 00:12:17.300 { 00:12:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.300 "dma_device_type": 2 00:12:17.300 } 00:12:17.300 ], 00:12:17.300 "driver_specific": {} 00:12:17.300 } 00:12:17.300 ] 
00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.300 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:46:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.559 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.559 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.559 "name": "Existed_Raid", 00:12:17.559 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:17.559 "strip_size_kb": 0, 00:12:17.559 "state": "online", 00:12:17.559 "raid_level": "raid1", 00:12:17.559 "superblock": true, 00:12:17.559 "num_base_bdevs": 3, 00:12:17.559 "num_base_bdevs_discovered": 3, 00:12:17.559 "num_base_bdevs_operational": 3, 00:12:17.559 "base_bdevs_list": [ 00:12:17.559 { 00:12:17.560 "name": "BaseBdev1", 00:12:17.560 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:17.560 "is_configured": true, 00:12:17.560 "data_offset": 2048, 00:12:17.560 "data_size": 63488 00:12:17.560 }, 00:12:17.560 { 00:12:17.560 "name": "BaseBdev2", 00:12:17.560 "uuid": "9fdaa991-8fd2-47a1-b108-fcafaff6f546", 00:12:17.560 "is_configured": true, 00:12:17.560 "data_offset": 2048, 00:12:17.560 "data_size": 63488 00:12:17.560 }, 00:12:17.560 { 00:12:17.560 "name": "BaseBdev3", 00:12:17.560 "uuid": "5516524e-7f39-46e4-8228-c77a778292c3", 00:12:17.560 "is_configured": true, 00:12:17.560 "data_offset": 2048, 00:12:17.560 "data_size": 63488 00:12:17.560 } 00:12:17.560 ] 00:12:17.560 }' 00:12:17.560 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.560 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.823 [2024-10-01 13:46:27.917902] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.823 "name": "Existed_Raid", 00:12:17.823 "aliases": [ 00:12:17.823 "627d27d6-82a8-4c8e-9167-e89e83260518" 00:12:17.823 ], 00:12:17.823 "product_name": "Raid Volume", 00:12:17.823 "block_size": 512, 00:12:17.823 "num_blocks": 63488, 00:12:17.823 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:17.823 "assigned_rate_limits": { 00:12:17.823 "rw_ios_per_sec": 0, 00:12:17.823 "rw_mbytes_per_sec": 0, 00:12:17.823 "r_mbytes_per_sec": 0, 00:12:17.823 "w_mbytes_per_sec": 0 00:12:17.823 }, 00:12:17.823 "claimed": false, 00:12:17.823 "zoned": false, 00:12:17.823 "supported_io_types": { 00:12:17.823 "read": true, 00:12:17.823 "write": true, 00:12:17.823 "unmap": false, 00:12:17.823 "flush": false, 00:12:17.823 "reset": true, 00:12:17.823 "nvme_admin": false, 00:12:17.823 "nvme_io": false, 00:12:17.823 "nvme_io_md": false, 00:12:17.823 
"write_zeroes": true, 00:12:17.823 "zcopy": false, 00:12:17.823 "get_zone_info": false, 00:12:17.823 "zone_management": false, 00:12:17.823 "zone_append": false, 00:12:17.823 "compare": false, 00:12:17.823 "compare_and_write": false, 00:12:17.823 "abort": false, 00:12:17.823 "seek_hole": false, 00:12:17.823 "seek_data": false, 00:12:17.823 "copy": false, 00:12:17.823 "nvme_iov_md": false 00:12:17.823 }, 00:12:17.823 "memory_domains": [ 00:12:17.823 { 00:12:17.823 "dma_device_id": "system", 00:12:17.823 "dma_device_type": 1 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.823 "dma_device_type": 2 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "dma_device_id": "system", 00:12:17.823 "dma_device_type": 1 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.823 "dma_device_type": 2 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "dma_device_id": "system", 00:12:17.823 "dma_device_type": 1 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.823 "dma_device_type": 2 00:12:17.823 } 00:12:17.823 ], 00:12:17.823 "driver_specific": { 00:12:17.823 "raid": { 00:12:17.823 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:17.823 "strip_size_kb": 0, 00:12:17.823 "state": "online", 00:12:17.823 "raid_level": "raid1", 00:12:17.823 "superblock": true, 00:12:17.823 "num_base_bdevs": 3, 00:12:17.823 "num_base_bdevs_discovered": 3, 00:12:17.823 "num_base_bdevs_operational": 3, 00:12:17.823 "base_bdevs_list": [ 00:12:17.823 { 00:12:17.823 "name": "BaseBdev1", 00:12:17.823 "uuid": "16791224-f035-4f64-9e2c-03f0c033a0d1", 00:12:17.823 "is_configured": true, 00:12:17.823 "data_offset": 2048, 00:12:17.823 "data_size": 63488 00:12:17.823 }, 00:12:17.823 { 00:12:17.823 "name": "BaseBdev2", 00:12:17.823 "uuid": "9fdaa991-8fd2-47a1-b108-fcafaff6f546", 00:12:17.823 "is_configured": true, 00:12:17.823 "data_offset": 2048, 00:12:17.823 "data_size": 63488 00:12:17.823 }, 
00:12:17.823 { 00:12:17.823 "name": "BaseBdev3", 00:12:17.823 "uuid": "5516524e-7f39-46e4-8228-c77a778292c3", 00:12:17.823 "is_configured": true, 00:12:17.823 "data_offset": 2048, 00:12:17.823 "data_size": 63488 00:12:17.823 } 00:12:17.823 ] 00:12:17.823 } 00:12:17.823 } 00:12:17.823 }' 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.823 BaseBdev2 00:12:17.823 BaseBdev3' 00:12:17.823 13:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.082 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.082 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.082 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.083 
13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.083 [2024-10-01 13:46:28.161339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.083 
13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.083 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.342 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.342 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.342 "name": "Existed_Raid", 00:12:18.342 "uuid": "627d27d6-82a8-4c8e-9167-e89e83260518", 00:12:18.342 "strip_size_kb": 0, 00:12:18.342 "state": "online", 00:12:18.342 "raid_level": "raid1", 00:12:18.342 "superblock": true, 00:12:18.342 "num_base_bdevs": 3, 00:12:18.342 "num_base_bdevs_discovered": 2, 00:12:18.342 "num_base_bdevs_operational": 2, 00:12:18.342 "base_bdevs_list": [ 00:12:18.342 { 00:12:18.342 "name": null, 00:12:18.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.342 "is_configured": false, 00:12:18.342 "data_offset": 0, 00:12:18.342 "data_size": 63488 00:12:18.342 }, 00:12:18.342 { 00:12:18.342 "name": "BaseBdev2", 00:12:18.342 "uuid": "9fdaa991-8fd2-47a1-b108-fcafaff6f546", 00:12:18.342 "is_configured": true, 00:12:18.342 "data_offset": 2048, 00:12:18.342 "data_size": 63488 00:12:18.342 }, 00:12:18.342 { 00:12:18.342 "name": "BaseBdev3", 00:12:18.342 "uuid": "5516524e-7f39-46e4-8228-c77a778292c3", 00:12:18.342 "is_configured": true, 00:12:18.342 "data_offset": 2048, 00:12:18.342 "data_size": 63488 00:12:18.342 } 00:12:18.342 ] 00:12:18.342 }' 00:12:18.342 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.342 
13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 [2024-10-01 13:46:28.691554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.602 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 [2024-10-01 13:46:28.842635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.862 [2024-10-01 13:46:28.842745] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.862 [2024-10-01 13:46:28.939985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.862 [2024-10-01 13:46:28.940042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.862 [2024-10-01 13:46:28.940057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 BaseBdev2 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.862 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 [ 00:12:19.122 { 00:12:19.122 "name": "BaseBdev2", 00:12:19.122 "aliases": [ 00:12:19.122 "74da2e0f-265b-4f15-879e-9b7402d02967" 00:12:19.122 ], 00:12:19.122 "product_name": "Malloc disk", 00:12:19.122 "block_size": 512, 00:12:19.122 "num_blocks": 65536, 00:12:19.122 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:19.122 "assigned_rate_limits": { 00:12:19.122 "rw_ios_per_sec": 0, 00:12:19.122 "rw_mbytes_per_sec": 0, 00:12:19.122 "r_mbytes_per_sec": 0, 00:12:19.122 "w_mbytes_per_sec": 0 00:12:19.122 }, 00:12:19.122 "claimed": false, 00:12:19.122 "zoned": false, 00:12:19.122 "supported_io_types": { 00:12:19.122 "read": true, 00:12:19.122 "write": true, 00:12:19.122 "unmap": true, 00:12:19.122 "flush": true, 00:12:19.122 "reset": true, 00:12:19.122 "nvme_admin": false, 00:12:19.122 "nvme_io": false, 00:12:19.122 
"nvme_io_md": false, 00:12:19.122 "write_zeroes": true, 00:12:19.122 "zcopy": true, 00:12:19.122 "get_zone_info": false, 00:12:19.122 "zone_management": false, 00:12:19.122 "zone_append": false, 00:12:19.122 "compare": false, 00:12:19.122 "compare_and_write": false, 00:12:19.122 "abort": true, 00:12:19.122 "seek_hole": false, 00:12:19.122 "seek_data": false, 00:12:19.122 "copy": true, 00:12:19.122 "nvme_iov_md": false 00:12:19.122 }, 00:12:19.122 "memory_domains": [ 00:12:19.122 { 00:12:19.122 "dma_device_id": "system", 00:12:19.122 "dma_device_type": 1 00:12:19.122 }, 00:12:19.122 { 00:12:19.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.122 "dma_device_type": 2 00:12:19.122 } 00:12:19.122 ], 00:12:19.123 "driver_specific": {} 00:12:19.123 } 00:12:19.123 ] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 BaseBdev3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 [ 00:12:19.123 { 00:12:19.123 "name": "BaseBdev3", 00:12:19.123 "aliases": [ 00:12:19.123 "cf2c1549-0d54-465e-9452-14e093e267b1" 00:12:19.123 ], 00:12:19.123 "product_name": "Malloc disk", 00:12:19.123 "block_size": 512, 00:12:19.123 "num_blocks": 65536, 00:12:19.123 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:19.123 "assigned_rate_limits": { 00:12:19.123 "rw_ios_per_sec": 0, 00:12:19.123 "rw_mbytes_per_sec": 0, 00:12:19.123 "r_mbytes_per_sec": 0, 00:12:19.123 "w_mbytes_per_sec": 0 00:12:19.123 }, 00:12:19.123 "claimed": false, 00:12:19.123 "zoned": false, 00:12:19.123 "supported_io_types": { 00:12:19.123 "read": true, 00:12:19.123 "write": true, 00:12:19.123 "unmap": true, 00:12:19.123 "flush": true, 00:12:19.123 "reset": true, 00:12:19.123 "nvme_admin": false, 
00:12:19.123 "nvme_io": false, 00:12:19.123 "nvme_io_md": false, 00:12:19.123 "write_zeroes": true, 00:12:19.123 "zcopy": true, 00:12:19.123 "get_zone_info": false, 00:12:19.123 "zone_management": false, 00:12:19.123 "zone_append": false, 00:12:19.123 "compare": false, 00:12:19.123 "compare_and_write": false, 00:12:19.123 "abort": true, 00:12:19.123 "seek_hole": false, 00:12:19.123 "seek_data": false, 00:12:19.123 "copy": true, 00:12:19.123 "nvme_iov_md": false 00:12:19.123 }, 00:12:19.123 "memory_domains": [ 00:12:19.123 { 00:12:19.123 "dma_device_id": "system", 00:12:19.123 "dma_device_type": 1 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.123 "dma_device_type": 2 00:12:19.123 } 00:12:19.123 ], 00:12:19.123 "driver_specific": {} 00:12:19.123 } 00:12:19.123 ] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 [2024-10-01 13:46:29.166867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.123 [2024-10-01 13:46:29.167043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.123 [2024-10-01 13:46:29.167140] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.123 [2024-10-01 13:46:29.169276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.123 
13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.123 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.123 "name": "Existed_Raid", 00:12:19.123 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:19.123 "strip_size_kb": 0, 00:12:19.123 "state": "configuring", 00:12:19.123 "raid_level": "raid1", 00:12:19.123 "superblock": true, 00:12:19.123 "num_base_bdevs": 3, 00:12:19.123 "num_base_bdevs_discovered": 2, 00:12:19.123 "num_base_bdevs_operational": 3, 00:12:19.123 "base_bdevs_list": [ 00:12:19.123 { 00:12:19.123 "name": "BaseBdev1", 00:12:19.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.123 "is_configured": false, 00:12:19.123 "data_offset": 0, 00:12:19.123 "data_size": 0 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "BaseBdev2", 00:12:19.123 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "BaseBdev3", 00:12:19.123 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 } 00:12:19.123 ] 00:12:19.123 }' 00:12:19.124 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.124 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.383 [2024-10-01 13:46:29.554435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.383 13:46:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.383 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.643 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.643 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.643 "name": 
"Existed_Raid", 00:12:19.643 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:19.643 "strip_size_kb": 0, 00:12:19.643 "state": "configuring", 00:12:19.643 "raid_level": "raid1", 00:12:19.643 "superblock": true, 00:12:19.643 "num_base_bdevs": 3, 00:12:19.643 "num_base_bdevs_discovered": 1, 00:12:19.643 "num_base_bdevs_operational": 3, 00:12:19.643 "base_bdevs_list": [ 00:12:19.643 { 00:12:19.643 "name": "BaseBdev1", 00:12:19.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.643 "is_configured": false, 00:12:19.643 "data_offset": 0, 00:12:19.643 "data_size": 0 00:12:19.643 }, 00:12:19.643 { 00:12:19.643 "name": null, 00:12:19.643 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:19.643 "is_configured": false, 00:12:19.643 "data_offset": 0, 00:12:19.643 "data_size": 63488 00:12:19.643 }, 00:12:19.643 { 00:12:19.643 "name": "BaseBdev3", 00:12:19.643 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:19.643 "is_configured": true, 00:12:19.643 "data_offset": 2048, 00:12:19.643 "data_size": 63488 00:12:19.643 } 00:12:19.643 ] 00:12:19.643 }' 00:12:19.643 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.643 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.902 13:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.902 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.902 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 13:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.902 
13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 [2024-10-01 13:46:30.049913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.902 BaseBdev1 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:19.902 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 [ 00:12:19.902 { 00:12:19.902 "name": "BaseBdev1", 00:12:19.902 "aliases": [ 00:12:19.902 "c35e5f1e-8b20-49be-a724-1768c76e40dd" 00:12:19.902 ], 00:12:19.902 "product_name": "Malloc disk", 00:12:19.902 "block_size": 512, 00:12:19.902 "num_blocks": 65536, 00:12:19.902 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:19.902 "assigned_rate_limits": { 00:12:19.902 "rw_ios_per_sec": 0, 00:12:19.902 "rw_mbytes_per_sec": 0, 00:12:19.902 "r_mbytes_per_sec": 0, 00:12:19.902 "w_mbytes_per_sec": 0 00:12:19.902 }, 00:12:19.902 "claimed": true, 00:12:19.902 "claim_type": "exclusive_write", 00:12:19.902 "zoned": false, 00:12:19.902 "supported_io_types": { 00:12:19.902 "read": true, 00:12:19.902 "write": true, 00:12:19.902 "unmap": true, 00:12:19.902 "flush": true, 00:12:19.902 "reset": true, 00:12:19.902 "nvme_admin": false, 00:12:19.902 "nvme_io": false, 00:12:19.902 "nvme_io_md": false, 00:12:19.902 "write_zeroes": true, 00:12:19.902 "zcopy": true, 00:12:19.902 "get_zone_info": false, 00:12:19.902 "zone_management": false, 00:12:19.902 "zone_append": false, 00:12:19.902 "compare": false, 00:12:19.902 "compare_and_write": false, 00:12:19.902 "abort": true, 00:12:19.902 "seek_hole": false, 00:12:19.902 "seek_data": false, 00:12:19.902 "copy": true, 00:12:20.162 "nvme_iov_md": false 00:12:20.162 }, 00:12:20.162 "memory_domains": [ 00:12:20.162 { 00:12:20.162 "dma_device_id": "system", 00:12:20.162 "dma_device_type": 1 00:12:20.162 }, 00:12:20.162 { 00:12:20.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.162 "dma_device_type": 2 00:12:20.162 } 00:12:20.162 ], 00:12:20.162 "driver_specific": {} 00:12:20.162 } 00:12:20.162 ] 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:20.162 
13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.162 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.162 "name": "Existed_Raid", 00:12:20.162 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:20.162 "strip_size_kb": 0, 
00:12:20.162 "state": "configuring", 00:12:20.162 "raid_level": "raid1", 00:12:20.162 "superblock": true, 00:12:20.162 "num_base_bdevs": 3, 00:12:20.162 "num_base_bdevs_discovered": 2, 00:12:20.162 "num_base_bdevs_operational": 3, 00:12:20.162 "base_bdevs_list": [ 00:12:20.162 { 00:12:20.162 "name": "BaseBdev1", 00:12:20.162 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:20.163 "is_configured": true, 00:12:20.163 "data_offset": 2048, 00:12:20.163 "data_size": 63488 00:12:20.163 }, 00:12:20.163 { 00:12:20.163 "name": null, 00:12:20.163 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:20.163 "is_configured": false, 00:12:20.163 "data_offset": 0, 00:12:20.163 "data_size": 63488 00:12:20.163 }, 00:12:20.163 { 00:12:20.163 "name": "BaseBdev3", 00:12:20.163 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:20.163 "is_configured": true, 00:12:20.163 "data_offset": 2048, 00:12:20.163 "data_size": 63488 00:12:20.163 } 00:12:20.163 ] 00:12:20.163 }' 00:12:20.163 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.163 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.422 [2024-10-01 13:46:30.549433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.422 13:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.422 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.422 "name": "Existed_Raid", 00:12:20.422 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:20.422 "strip_size_kb": 0, 00:12:20.423 "state": "configuring", 00:12:20.423 "raid_level": "raid1", 00:12:20.423 "superblock": true, 00:12:20.423 "num_base_bdevs": 3, 00:12:20.423 "num_base_bdevs_discovered": 1, 00:12:20.423 "num_base_bdevs_operational": 3, 00:12:20.423 "base_bdevs_list": [ 00:12:20.423 { 00:12:20.423 "name": "BaseBdev1", 00:12:20.423 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:20.423 "is_configured": true, 00:12:20.423 "data_offset": 2048, 00:12:20.423 "data_size": 63488 00:12:20.423 }, 00:12:20.423 { 00:12:20.423 "name": null, 00:12:20.423 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:20.423 "is_configured": false, 00:12:20.423 "data_offset": 0, 00:12:20.423 "data_size": 63488 00:12:20.423 }, 00:12:20.423 { 00:12:20.423 "name": null, 00:12:20.423 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:20.423 "is_configured": false, 00:12:20.423 "data_offset": 0, 00:12:20.423 "data_size": 63488 00:12:20.423 } 00:12:20.423 ] 00:12:20.423 }' 00:12:20.423 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.423 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 13:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.991 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.991 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 13:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.991 13:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.991 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.991 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.991 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.991 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.992 [2024-10-01 13:46:31.028741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.992 "name": "Existed_Raid", 00:12:20.992 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:20.992 "strip_size_kb": 0, 00:12:20.992 "state": "configuring", 00:12:20.992 "raid_level": "raid1", 00:12:20.992 "superblock": true, 00:12:20.992 "num_base_bdevs": 3, 00:12:20.992 "num_base_bdevs_discovered": 2, 00:12:20.992 "num_base_bdevs_operational": 3, 00:12:20.992 "base_bdevs_list": [ 00:12:20.992 { 00:12:20.992 "name": "BaseBdev1", 00:12:20.992 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:20.992 "is_configured": true, 00:12:20.992 "data_offset": 2048, 00:12:20.992 "data_size": 63488 00:12:20.992 }, 00:12:20.992 { 00:12:20.992 "name": null, 00:12:20.992 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:20.992 "is_configured": false, 00:12:20.992 "data_offset": 0, 00:12:20.992 "data_size": 63488 00:12:20.992 }, 00:12:20.992 { 00:12:20.992 "name": "BaseBdev3", 00:12:20.992 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:20.992 "is_configured": true, 00:12:20.992 "data_offset": 2048, 00:12:20.992 "data_size": 63488 00:12:20.992 } 00:12:20.992 ] 00:12:20.992 }' 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.992 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.251 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.251 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:21.251 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.251 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.511 [2024-10-01 13:46:31.476230] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.511 "name": "Existed_Raid", 00:12:21.511 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:21.511 "strip_size_kb": 0, 00:12:21.511 "state": "configuring", 00:12:21.511 "raid_level": "raid1", 00:12:21.511 "superblock": true, 00:12:21.511 "num_base_bdevs": 3, 00:12:21.511 "num_base_bdevs_discovered": 1, 00:12:21.511 "num_base_bdevs_operational": 3, 00:12:21.511 "base_bdevs_list": [ 00:12:21.511 { 00:12:21.511 "name": null, 00:12:21.511 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:21.511 "is_configured": false, 00:12:21.511 "data_offset": 0, 00:12:21.511 "data_size": 63488 00:12:21.511 }, 00:12:21.511 { 00:12:21.511 "name": null, 00:12:21.511 "uuid": 
"74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:21.511 "is_configured": false, 00:12:21.511 "data_offset": 0, 00:12:21.511 "data_size": 63488 00:12:21.511 }, 00:12:21.511 { 00:12:21.511 "name": "BaseBdev3", 00:12:21.511 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:21.511 "is_configured": true, 00:12:21.511 "data_offset": 2048, 00:12:21.511 "data_size": 63488 00:12:21.511 } 00:12:21.511 ] 00:12:21.511 }' 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.511 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.079 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.080 13:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.080 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.080 13:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.080 [2024-10-01 13:46:32.026308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.080 "name": "Existed_Raid", 00:12:22.080 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:22.080 "strip_size_kb": 0, 00:12:22.080 "state": "configuring", 00:12:22.080 
"raid_level": "raid1", 00:12:22.080 "superblock": true, 00:12:22.080 "num_base_bdevs": 3, 00:12:22.080 "num_base_bdevs_discovered": 2, 00:12:22.080 "num_base_bdevs_operational": 3, 00:12:22.080 "base_bdevs_list": [ 00:12:22.080 { 00:12:22.080 "name": null, 00:12:22.080 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:22.080 "is_configured": false, 00:12:22.080 "data_offset": 0, 00:12:22.080 "data_size": 63488 00:12:22.080 }, 00:12:22.080 { 00:12:22.080 "name": "BaseBdev2", 00:12:22.080 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:22.080 "is_configured": true, 00:12:22.080 "data_offset": 2048, 00:12:22.080 "data_size": 63488 00:12:22.080 }, 00:12:22.080 { 00:12:22.080 "name": "BaseBdev3", 00:12:22.080 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:22.080 "is_configured": true, 00:12:22.080 "data_offset": 2048, 00:12:22.080 "data_size": 63488 00:12:22.080 } 00:12:22.080 ] 00:12:22.080 }' 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.080 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.340 13:46:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c35e5f1e-8b20-49be-a724-1768c76e40dd 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.340 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.600 [2024-10-01 13:46:32.568983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:22.600 [2024-10-01 13:46:32.569231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:22.600 [2024-10-01 13:46:32.569245] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.600 [2024-10-01 13:46:32.569535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:22.600 [2024-10-01 13:46:32.569723] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:22.600 [2024-10-01 13:46:32.569739] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:22.600 NewBaseBdev 00:12:22.600 [2024-10-01 13:46:32.569893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:22.600 
13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.600 [ 00:12:22.600 { 00:12:22.600 "name": "NewBaseBdev", 00:12:22.600 "aliases": [ 00:12:22.600 "c35e5f1e-8b20-49be-a724-1768c76e40dd" 00:12:22.600 ], 00:12:22.600 "product_name": "Malloc disk", 00:12:22.600 "block_size": 512, 00:12:22.600 "num_blocks": 65536, 00:12:22.600 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:22.600 "assigned_rate_limits": { 00:12:22.600 "rw_ios_per_sec": 0, 00:12:22.600 "rw_mbytes_per_sec": 0, 00:12:22.600 "r_mbytes_per_sec": 0, 00:12:22.600 "w_mbytes_per_sec": 0 00:12:22.600 }, 00:12:22.600 "claimed": true, 00:12:22.600 "claim_type": "exclusive_write", 00:12:22.600 
"zoned": false, 00:12:22.600 "supported_io_types": { 00:12:22.600 "read": true, 00:12:22.600 "write": true, 00:12:22.600 "unmap": true, 00:12:22.600 "flush": true, 00:12:22.600 "reset": true, 00:12:22.600 "nvme_admin": false, 00:12:22.600 "nvme_io": false, 00:12:22.600 "nvme_io_md": false, 00:12:22.600 "write_zeroes": true, 00:12:22.600 "zcopy": true, 00:12:22.600 "get_zone_info": false, 00:12:22.600 "zone_management": false, 00:12:22.600 "zone_append": false, 00:12:22.600 "compare": false, 00:12:22.600 "compare_and_write": false, 00:12:22.600 "abort": true, 00:12:22.600 "seek_hole": false, 00:12:22.600 "seek_data": false, 00:12:22.600 "copy": true, 00:12:22.600 "nvme_iov_md": false 00:12:22.600 }, 00:12:22.600 "memory_domains": [ 00:12:22.600 { 00:12:22.600 "dma_device_id": "system", 00:12:22.600 "dma_device_type": 1 00:12:22.600 }, 00:12:22.600 { 00:12:22.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.600 "dma_device_type": 2 00:12:22.600 } 00:12:22.600 ], 00:12:22.600 "driver_specific": {} 00:12:22.600 } 00:12:22.600 ] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.600 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.600 "name": "Existed_Raid", 00:12:22.600 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:22.600 "strip_size_kb": 0, 00:12:22.600 "state": "online", 00:12:22.600 "raid_level": "raid1", 00:12:22.600 "superblock": true, 00:12:22.600 "num_base_bdevs": 3, 00:12:22.600 "num_base_bdevs_discovered": 3, 00:12:22.600 "num_base_bdevs_operational": 3, 00:12:22.600 "base_bdevs_list": [ 00:12:22.600 { 00:12:22.600 "name": "NewBaseBdev", 00:12:22.600 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:22.600 "is_configured": true, 00:12:22.600 "data_offset": 2048, 00:12:22.600 "data_size": 63488 00:12:22.600 }, 00:12:22.600 { 00:12:22.600 "name": "BaseBdev2", 00:12:22.600 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:22.600 "is_configured": true, 00:12:22.600 "data_offset": 2048, 00:12:22.600 "data_size": 63488 00:12:22.600 }, 00:12:22.601 
{ 00:12:22.601 "name": "BaseBdev3", 00:12:22.601 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:22.601 "is_configured": true, 00:12:22.601 "data_offset": 2048, 00:12:22.601 "data_size": 63488 00:12:22.601 } 00:12:22.601 ] 00:12:22.601 }' 00:12:22.601 13:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.601 13:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.861 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.861 [2024-10-01 13:46:33.036776] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.120 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.120 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.120 "name": "Existed_Raid", 00:12:23.120 
"aliases": [ 00:12:23.120 "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6" 00:12:23.120 ], 00:12:23.120 "product_name": "Raid Volume", 00:12:23.120 "block_size": 512, 00:12:23.120 "num_blocks": 63488, 00:12:23.120 "uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:23.120 "assigned_rate_limits": { 00:12:23.120 "rw_ios_per_sec": 0, 00:12:23.120 "rw_mbytes_per_sec": 0, 00:12:23.120 "r_mbytes_per_sec": 0, 00:12:23.120 "w_mbytes_per_sec": 0 00:12:23.120 }, 00:12:23.120 "claimed": false, 00:12:23.120 "zoned": false, 00:12:23.120 "supported_io_types": { 00:12:23.120 "read": true, 00:12:23.120 "write": true, 00:12:23.120 "unmap": false, 00:12:23.120 "flush": false, 00:12:23.120 "reset": true, 00:12:23.120 "nvme_admin": false, 00:12:23.120 "nvme_io": false, 00:12:23.120 "nvme_io_md": false, 00:12:23.120 "write_zeroes": true, 00:12:23.120 "zcopy": false, 00:12:23.120 "get_zone_info": false, 00:12:23.120 "zone_management": false, 00:12:23.120 "zone_append": false, 00:12:23.120 "compare": false, 00:12:23.120 "compare_and_write": false, 00:12:23.120 "abort": false, 00:12:23.120 "seek_hole": false, 00:12:23.120 "seek_data": false, 00:12:23.120 "copy": false, 00:12:23.120 "nvme_iov_md": false 00:12:23.120 }, 00:12:23.120 "memory_domains": [ 00:12:23.120 { 00:12:23.120 "dma_device_id": "system", 00:12:23.120 "dma_device_type": 1 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.120 "dma_device_type": 2 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "dma_device_id": "system", 00:12:23.120 "dma_device_type": 1 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.120 "dma_device_type": 2 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "dma_device_id": "system", 00:12:23.120 "dma_device_type": 1 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.120 "dma_device_type": 2 00:12:23.120 } 00:12:23.120 ], 00:12:23.120 "driver_specific": { 00:12:23.120 "raid": { 00:12:23.120 
"uuid": "3088c666-1bf9-4f6a-98a5-a684ed3c5aa6", 00:12:23.120 "strip_size_kb": 0, 00:12:23.120 "state": "online", 00:12:23.120 "raid_level": "raid1", 00:12:23.120 "superblock": true, 00:12:23.120 "num_base_bdevs": 3, 00:12:23.120 "num_base_bdevs_discovered": 3, 00:12:23.120 "num_base_bdevs_operational": 3, 00:12:23.120 "base_bdevs_list": [ 00:12:23.120 { 00:12:23.120 "name": "NewBaseBdev", 00:12:23.120 "uuid": "c35e5f1e-8b20-49be-a724-1768c76e40dd", 00:12:23.120 "is_configured": true, 00:12:23.120 "data_offset": 2048, 00:12:23.120 "data_size": 63488 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "name": "BaseBdev2", 00:12:23.120 "uuid": "74da2e0f-265b-4f15-879e-9b7402d02967", 00:12:23.120 "is_configured": true, 00:12:23.120 "data_offset": 2048, 00:12:23.120 "data_size": 63488 00:12:23.120 }, 00:12:23.120 { 00:12:23.120 "name": "BaseBdev3", 00:12:23.120 "uuid": "cf2c1549-0d54-465e-9452-14e093e267b1", 00:12:23.120 "is_configured": true, 00:12:23.120 "data_offset": 2048, 00:12:23.120 "data_size": 63488 00:12:23.120 } 00:12:23.120 ] 00:12:23.120 } 00:12:23.120 } 00:12:23.120 }' 00:12:23.120 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.120 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:23.120 BaseBdev2 00:12:23.120 BaseBdev3' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:23.121 13:46:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.121 [2024-10-01 13:46:33.292058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.121 [2024-10-01 13:46:33.292206] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.121 [2024-10-01 13:46:33.292364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.121 [2024-10-01 13:46:33.292744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.121 [2024-10-01 13:46:33.292868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67918 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 67918 ']' 
00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67918 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.121 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67918 00:12:23.380 killing process with pid 67918 00:12:23.380 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.380 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.380 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67918' 00:12:23.380 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 67918 00:12:23.380 [2024-10-01 13:46:33.333888] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.380 13:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67918 00:12:23.638 [2024-10-01 13:46:33.637020] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.031 13:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:25.031 00:12:25.031 real 0m10.327s 00:12:25.031 user 0m16.251s 00:12:25.031 sys 0m1.948s 00:12:25.031 13:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.031 ************************************ 00:12:25.031 END TEST raid_state_function_test_sb 00:12:25.031 ************************************ 00:12:25.031 13:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.031 13:46:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:12:25.031 13:46:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.031 13:46:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.031 13:46:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.031 ************************************ 00:12:25.031 START TEST raid_superblock_test 00:12:25.031 ************************************ 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68534 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68534 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68534 ']' 00:12:25.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.031 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.031 [2024-10-01 13:46:35.093233] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:25.031 [2024-10-01 13:46:35.094197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68534 ] 00:12:25.289 [2024-10-01 13:46:35.268085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.546 [2024-10-01 13:46:35.503870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.546 [2024-10-01 13:46:35.720789] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.547 [2024-10-01 13:46:35.721047] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.805 
13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.805 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 malloc1 00:12:26.064 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.064 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 [2024-10-01 13:46:36.003461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.064 [2024-10-01 13:46:36.003698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.064 [2024-10-01 13:46:36.003763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:26.064 [2024-10-01 13:46:36.003858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.064 [2024-10-01 13:46:36.006393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.064 [2024-10-01 13:46:36.006561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.064 pt1 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 malloc2 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 [2024-10-01 13:46:36.071420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.064 [2024-10-01 13:46:36.071594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.064 [2024-10-01 13:46:36.071673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:26.064 [2024-10-01 13:46:36.071753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.064 [2024-10-01 13:46:36.074164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.064 [2024-10-01 13:46:36.074294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.064 
pt2 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 malloc3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 [2024-10-01 13:46:36.130642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.064 [2024-10-01 13:46:36.130814] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.064 [2024-10-01 13:46:36.130874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:26.064 [2024-10-01 13:46:36.130889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.064 [2024-10-01 13:46:36.133443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.064 [2024-10-01 13:46:36.133484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.064 pt3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 [2024-10-01 13:46:36.142679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.064 [2024-10-01 13:46:36.145029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.064 [2024-10-01 13:46:36.145241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.064 [2024-10-01 13:46:36.145446] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:26.064 [2024-10-01 13:46:36.145465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.064 [2024-10-01 13:46:36.145752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:26.064 
[2024-10-01 13:46:36.145957] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:26.064 [2024-10-01 13:46:36.145969] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:26.064 [2024-10-01 13:46:36.146145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.064 "name": "raid_bdev1", 00:12:26.064 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:26.064 "strip_size_kb": 0, 00:12:26.064 "state": "online", 00:12:26.064 "raid_level": "raid1", 00:12:26.064 "superblock": true, 00:12:26.064 "num_base_bdevs": 3, 00:12:26.064 "num_base_bdevs_discovered": 3, 00:12:26.064 "num_base_bdevs_operational": 3, 00:12:26.064 "base_bdevs_list": [ 00:12:26.064 { 00:12:26.064 "name": "pt1", 00:12:26.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.064 "is_configured": true, 00:12:26.064 "data_offset": 2048, 00:12:26.064 "data_size": 63488 00:12:26.064 }, 00:12:26.064 { 00:12:26.064 "name": "pt2", 00:12:26.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.064 "is_configured": true, 00:12:26.064 "data_offset": 2048, 00:12:26.064 "data_size": 63488 00:12:26.064 }, 00:12:26.064 { 00:12:26.064 "name": "pt3", 00:12:26.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.064 "is_configured": true, 00:12:26.064 "data_offset": 2048, 00:12:26.064 "data_size": 63488 00:12:26.064 } 00:12:26.064 ] 00:12:26.064 }' 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.064 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.630 13:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.630 [2024-10-01 13:46:36.550440] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.630 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.630 "name": "raid_bdev1", 00:12:26.631 "aliases": [ 00:12:26.631 "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6" 00:12:26.631 ], 00:12:26.631 "product_name": "Raid Volume", 00:12:26.631 "block_size": 512, 00:12:26.631 "num_blocks": 63488, 00:12:26.631 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:26.631 "assigned_rate_limits": { 00:12:26.631 "rw_ios_per_sec": 0, 00:12:26.631 "rw_mbytes_per_sec": 0, 00:12:26.631 "r_mbytes_per_sec": 0, 00:12:26.631 "w_mbytes_per_sec": 0 00:12:26.631 }, 00:12:26.631 "claimed": false, 00:12:26.631 "zoned": false, 00:12:26.631 "supported_io_types": { 00:12:26.631 "read": true, 00:12:26.631 "write": true, 00:12:26.631 "unmap": false, 00:12:26.631 "flush": false, 00:12:26.631 "reset": true, 00:12:26.631 "nvme_admin": false, 00:12:26.631 "nvme_io": false, 00:12:26.631 "nvme_io_md": false, 00:12:26.631 "write_zeroes": true, 00:12:26.631 "zcopy": false, 00:12:26.631 "get_zone_info": false, 00:12:26.631 "zone_management": false, 00:12:26.631 "zone_append": false, 00:12:26.631 "compare": false, 00:12:26.631 
"compare_and_write": false, 00:12:26.631 "abort": false, 00:12:26.631 "seek_hole": false, 00:12:26.631 "seek_data": false, 00:12:26.631 "copy": false, 00:12:26.631 "nvme_iov_md": false 00:12:26.631 }, 00:12:26.631 "memory_domains": [ 00:12:26.631 { 00:12:26.631 "dma_device_id": "system", 00:12:26.631 "dma_device_type": 1 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.631 "dma_device_type": 2 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "dma_device_id": "system", 00:12:26.631 "dma_device_type": 1 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.631 "dma_device_type": 2 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "dma_device_id": "system", 00:12:26.631 "dma_device_type": 1 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.631 "dma_device_type": 2 00:12:26.631 } 00:12:26.631 ], 00:12:26.631 "driver_specific": { 00:12:26.631 "raid": { 00:12:26.631 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:26.631 "strip_size_kb": 0, 00:12:26.631 "state": "online", 00:12:26.631 "raid_level": "raid1", 00:12:26.631 "superblock": true, 00:12:26.631 "num_base_bdevs": 3, 00:12:26.631 "num_base_bdevs_discovered": 3, 00:12:26.631 "num_base_bdevs_operational": 3, 00:12:26.631 "base_bdevs_list": [ 00:12:26.631 { 00:12:26.631 "name": "pt1", 00:12:26.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.631 "is_configured": true, 00:12:26.631 "data_offset": 2048, 00:12:26.631 "data_size": 63488 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "name": "pt2", 00:12:26.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.631 "is_configured": true, 00:12:26.631 "data_offset": 2048, 00:12:26.631 "data_size": 63488 00:12:26.631 }, 00:12:26.631 { 00:12:26.631 "name": "pt3", 00:12:26.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.631 "is_configured": true, 00:12:26.631 "data_offset": 2048, 00:12:26.631 "data_size": 63488 00:12:26.631 } 
00:12:26.631 ] 00:12:26.631 } 00:12:26.631 } 00:12:26.631 }' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:26.631 pt2 00:12:26.631 pt3' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.631 13:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.631 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.632 [2024-10-01 13:46:36.806029] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 ']' 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 [2024-10-01 13:46:36.849663] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.890 [2024-10-01 13:46:36.849820] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.890 [2024-10-01 13:46:36.849974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.890 [2024-10-01 13:46:36.850162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.890 [2024-10-01 13:46:36.850264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.890 
13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 [2024-10-01 13:46:36.993530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.890 [2024-10-01 13:46:36.995933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.890 [2024-10-01 13:46:36.996108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:12:26.890 [2024-10-01 13:46:36.996173] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.890 [2024-10-01 13:46:36.996231] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.890 [2024-10-01 13:46:36.996255] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.890 [2024-10-01 13:46:36.996277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.890 [2024-10-01 13:46:36.996288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:26.890 request: 00:12:26.890 { 00:12:26.890 "name": "raid_bdev1", 00:12:26.890 "raid_level": "raid1", 00:12:26.890 "base_bdevs": [ 00:12:26.890 "malloc1", 00:12:26.890 "malloc2", 00:12:26.890 "malloc3" 00:12:26.890 ], 00:12:26.890 "superblock": false, 00:12:26.890 "method": "bdev_raid_create", 00:12:26.890 "req_id": 1 00:12:26.890 } 00:12:26.890 Got JSON-RPC error response 00:12:26.890 response: 00:12:26.890 { 00:12:26.890 "code": -17, 00:12:26.890 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.890 } 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.890 13:46:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 [2024-10-01 13:46:37.053403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.891 [2024-10-01 13:46:37.053500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.891 [2024-10-01 13:46:37.053531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:26.891 [2024-10-01 13:46:37.053544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.891 [2024-10-01 13:46:37.056188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.891 [2024-10-01 13:46:37.056232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.891 [2024-10-01 13:46:37.056327] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.891 [2024-10-01 13:46:37.056380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.891 pt1 00:12:26.891 13:46:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.891 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.149 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.149 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.149 "name": "raid_bdev1", 00:12:27.149 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:27.149 "strip_size_kb": 0, 00:12:27.149 "state": 
"configuring", 00:12:27.149 "raid_level": "raid1", 00:12:27.149 "superblock": true, 00:12:27.149 "num_base_bdevs": 3, 00:12:27.149 "num_base_bdevs_discovered": 1, 00:12:27.149 "num_base_bdevs_operational": 3, 00:12:27.149 "base_bdevs_list": [ 00:12:27.149 { 00:12:27.149 "name": "pt1", 00:12:27.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.149 "is_configured": true, 00:12:27.149 "data_offset": 2048, 00:12:27.149 "data_size": 63488 00:12:27.149 }, 00:12:27.149 { 00:12:27.149 "name": null, 00:12:27.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.149 "is_configured": false, 00:12:27.149 "data_offset": 2048, 00:12:27.149 "data_size": 63488 00:12:27.149 }, 00:12:27.149 { 00:12:27.149 "name": null, 00:12:27.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.149 "is_configured": false, 00:12:27.149 "data_offset": 2048, 00:12:27.149 "data_size": 63488 00:12:27.149 } 00:12:27.149 ] 00:12:27.149 }' 00:12:27.149 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.149 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.406 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:27.406 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.407 [2024-10-01 13:46:37.492808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.407 [2024-10-01 13:46:37.493016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.407 [2024-10-01 13:46:37.493091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:27.407 
[2024-10-01 13:46:37.493181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.407 [2024-10-01 13:46:37.493695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.407 [2024-10-01 13:46:37.493803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.407 [2024-10-01 13:46:37.493931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.407 [2024-10-01 13:46:37.494053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.407 pt2 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.407 [2024-10-01 13:46:37.500795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.407 "name": "raid_bdev1", 00:12:27.407 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:27.407 "strip_size_kb": 0, 00:12:27.407 "state": "configuring", 00:12:27.407 "raid_level": "raid1", 00:12:27.407 "superblock": true, 00:12:27.407 "num_base_bdevs": 3, 00:12:27.407 "num_base_bdevs_discovered": 1, 00:12:27.407 "num_base_bdevs_operational": 3, 00:12:27.407 "base_bdevs_list": [ 00:12:27.407 { 00:12:27.407 "name": "pt1", 00:12:27.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.407 "is_configured": true, 00:12:27.407 "data_offset": 2048, 00:12:27.407 "data_size": 63488 00:12:27.407 }, 00:12:27.407 { 00:12:27.407 "name": null, 00:12:27.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.407 "is_configured": false, 00:12:27.407 "data_offset": 0, 00:12:27.407 "data_size": 63488 00:12:27.407 }, 00:12:27.407 { 00:12:27.407 "name": null, 00:12:27.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.407 "is_configured": false, 00:12:27.407 
"data_offset": 2048, 00:12:27.407 "data_size": 63488 00:12:27.407 } 00:12:27.407 ] 00:12:27.407 }' 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.407 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.974 [2024-10-01 13:46:37.900443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.974 [2024-10-01 13:46:37.900521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.974 [2024-10-01 13:46:37.900544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:27.974 [2024-10-01 13:46:37.900559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.974 [2024-10-01 13:46:37.901044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.974 [2024-10-01 13:46:37.901067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.974 [2024-10-01 13:46:37.901157] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.974 [2024-10-01 13:46:37.901193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.974 pt2 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.974 13:46:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.974 [2024-10-01 13:46:37.912421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.974 [2024-10-01 13:46:37.912593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.974 [2024-10-01 13:46:37.912653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.974 [2024-10-01 13:46:37.912746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.974 [2024-10-01 13:46:37.913165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.974 [2024-10-01 13:46:37.913282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.974 [2024-10-01 13:46:37.913385] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.974 [2024-10-01 13:46:37.913491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.974 [2024-10-01 13:46:37.913700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.974 [2024-10-01 13:46:37.913739] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.974 [2024-10-01 13:46:37.914011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:27.974 [2024-10-01 13:46:37.914194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:12:27.974 [2024-10-01 13:46:37.914252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.974 [2024-10-01 13:46:37.914536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.974 pt3 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.974 "name": "raid_bdev1", 00:12:27.974 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:27.974 "strip_size_kb": 0, 00:12:27.974 "state": "online", 00:12:27.974 "raid_level": "raid1", 00:12:27.974 "superblock": true, 00:12:27.974 "num_base_bdevs": 3, 00:12:27.974 "num_base_bdevs_discovered": 3, 00:12:27.974 "num_base_bdevs_operational": 3, 00:12:27.974 "base_bdevs_list": [ 00:12:27.974 { 00:12:27.974 "name": "pt1", 00:12:27.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.974 "is_configured": true, 00:12:27.974 "data_offset": 2048, 00:12:27.974 "data_size": 63488 00:12:27.974 }, 00:12:27.974 { 00:12:27.974 "name": "pt2", 00:12:27.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.974 "is_configured": true, 00:12:27.974 "data_offset": 2048, 00:12:27.974 "data_size": 63488 00:12:27.974 }, 00:12:27.974 { 00:12:27.974 "name": "pt3", 00:12:27.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.974 "is_configured": true, 00:12:27.974 "data_offset": 2048, 00:12:27.974 "data_size": 63488 00:12:27.974 } 00:12:27.974 ] 00:12:27.974 }' 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.974 13:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.290 [2024-10-01 13:46:38.340121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.290 "name": "raid_bdev1", 00:12:28.290 "aliases": [ 00:12:28.290 "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6" 00:12:28.290 ], 00:12:28.290 "product_name": "Raid Volume", 00:12:28.290 "block_size": 512, 00:12:28.290 "num_blocks": 63488, 00:12:28.290 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:28.290 "assigned_rate_limits": { 00:12:28.290 "rw_ios_per_sec": 0, 00:12:28.290 "rw_mbytes_per_sec": 0, 00:12:28.290 "r_mbytes_per_sec": 0, 00:12:28.290 "w_mbytes_per_sec": 0 00:12:28.290 }, 00:12:28.290 "claimed": false, 00:12:28.290 "zoned": false, 00:12:28.290 "supported_io_types": { 00:12:28.290 "read": true, 00:12:28.290 "write": true, 00:12:28.290 "unmap": false, 00:12:28.290 "flush": false, 00:12:28.290 "reset": true, 00:12:28.290 "nvme_admin": false, 00:12:28.290 "nvme_io": false, 00:12:28.290 "nvme_io_md": false, 00:12:28.290 "write_zeroes": true, 00:12:28.290 "zcopy": false, 00:12:28.290 "get_zone_info": false, 
00:12:28.290 "zone_management": false, 00:12:28.290 "zone_append": false, 00:12:28.290 "compare": false, 00:12:28.290 "compare_and_write": false, 00:12:28.290 "abort": false, 00:12:28.290 "seek_hole": false, 00:12:28.290 "seek_data": false, 00:12:28.290 "copy": false, 00:12:28.290 "nvme_iov_md": false 00:12:28.290 }, 00:12:28.290 "memory_domains": [ 00:12:28.290 { 00:12:28.290 "dma_device_id": "system", 00:12:28.290 "dma_device_type": 1 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.290 "dma_device_type": 2 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "dma_device_id": "system", 00:12:28.290 "dma_device_type": 1 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.290 "dma_device_type": 2 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "dma_device_id": "system", 00:12:28.290 "dma_device_type": 1 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.290 "dma_device_type": 2 00:12:28.290 } 00:12:28.290 ], 00:12:28.290 "driver_specific": { 00:12:28.290 "raid": { 00:12:28.290 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:28.290 "strip_size_kb": 0, 00:12:28.290 "state": "online", 00:12:28.290 "raid_level": "raid1", 00:12:28.290 "superblock": true, 00:12:28.290 "num_base_bdevs": 3, 00:12:28.290 "num_base_bdevs_discovered": 3, 00:12:28.290 "num_base_bdevs_operational": 3, 00:12:28.290 "base_bdevs_list": [ 00:12:28.290 { 00:12:28.290 "name": "pt1", 00:12:28.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.290 "is_configured": true, 00:12:28.290 "data_offset": 2048, 00:12:28.290 "data_size": 63488 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "name": "pt2", 00:12:28.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.290 "is_configured": true, 00:12:28.290 "data_offset": 2048, 00:12:28.290 "data_size": 63488 00:12:28.290 }, 00:12:28.290 { 00:12:28.290 "name": "pt3", 00:12:28.290 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:28.290 "is_configured": true, 00:12:28.290 "data_offset": 2048, 00:12:28.290 "data_size": 63488 00:12:28.290 } 00:12:28.290 ] 00:12:28.290 } 00:12:28.290 } 00:12:28.290 }' 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.290 pt2 00:12:28.290 pt3' 00:12:28.290 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 [2024-10-01 13:46:38.631852] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 '!=' 0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 ']' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 [2024-10-01 13:46:38.679589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.549 13:46:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.549 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.549 "name": "raid_bdev1", 00:12:28.549 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:28.549 "strip_size_kb": 0, 00:12:28.549 "state": "online", 00:12:28.549 "raid_level": "raid1", 00:12:28.549 "superblock": true, 00:12:28.550 "num_base_bdevs": 3, 00:12:28.550 "num_base_bdevs_discovered": 2, 00:12:28.550 "num_base_bdevs_operational": 2, 00:12:28.550 "base_bdevs_list": [ 00:12:28.550 { 00:12:28.550 "name": null, 00:12:28.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.550 "is_configured": false, 00:12:28.550 "data_offset": 0, 00:12:28.550 "data_size": 63488 00:12:28.550 }, 00:12:28.550 { 00:12:28.550 "name": "pt2", 00:12:28.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.550 "is_configured": true, 00:12:28.550 "data_offset": 2048, 00:12:28.550 "data_size": 63488 00:12:28.550 }, 00:12:28.550 { 00:12:28.550 "name": "pt3", 00:12:28.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.550 "is_configured": true, 00:12:28.550 "data_offset": 2048, 00:12:28.550 "data_size": 63488 00:12:28.550 } 
00:12:28.550 ] 00:12:28.550 }' 00:12:28.550 13:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.550 13:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 [2024-10-01 13:46:39.091549] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.115 [2024-10-01 13:46:39.091582] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.115 [2024-10-01 13:46:39.091662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.115 [2024-10-01 13:46:39.091720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.115 [2024-10-01 13:46:39.091751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.115 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:46:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 [2024-10-01 13:46:39.171503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:29.116 [2024-10-01 13:46:39.171681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.116 [2024-10-01 13:46:39.171735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:29.116 [2024-10-01 13:46:39.171831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.116 [2024-10-01 13:46:39.174296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.116 [2024-10-01 13:46:39.174455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.116 [2024-10-01 13:46:39.174612] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:29.116 [2024-10-01 13:46:39.174671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.116 pt2 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.116 13:46:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.116 "name": "raid_bdev1", 00:12:29.116 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:29.116 "strip_size_kb": 0, 00:12:29.116 "state": "configuring", 00:12:29.116 "raid_level": "raid1", 00:12:29.116 "superblock": true, 00:12:29.116 "num_base_bdevs": 3, 00:12:29.116 "num_base_bdevs_discovered": 1, 00:12:29.116 "num_base_bdevs_operational": 2, 00:12:29.116 "base_bdevs_list": [ 00:12:29.116 { 00:12:29.116 "name": null, 00:12:29.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.116 "is_configured": false, 00:12:29.116 "data_offset": 2048, 00:12:29.116 "data_size": 63488 00:12:29.116 }, 00:12:29.116 { 00:12:29.116 "name": "pt2", 00:12:29.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.116 "is_configured": true, 00:12:29.116 "data_offset": 2048, 00:12:29.116 "data_size": 63488 00:12:29.116 }, 00:12:29.116 { 00:12:29.116 "name": null, 00:12:29.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.116 "is_configured": false, 00:12:29.116 "data_offset": 2048, 00:12:29.116 "data_size": 63488 00:12:29.116 } 
00:12:29.116 ] 00:12:29.116 }' 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.116 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.682 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.682 [2024-10-01 13:46:39.595508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.682 [2024-10-01 13:46:39.595696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.682 [2024-10-01 13:46:39.595755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:29.682 [2024-10-01 13:46:39.595838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.682 [2024-10-01 13:46:39.596316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.682 [2024-10-01 13:46:39.596340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.682 [2024-10-01 13:46:39.596445] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:29.682 [2024-10-01 13:46:39.596478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.682 [2024-10-01 13:46:39.596586] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:29.682 [2024-10-01 13:46:39.596598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.682 [2024-10-01 13:46:39.596845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:29.682 [2024-10-01 13:46:39.596977] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.682 [2024-10-01 13:46:39.596985] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:29.682 [2024-10-01 13:46:39.597127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.682 pt3 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.683 "name": "raid_bdev1", 00:12:29.683 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:29.683 "strip_size_kb": 0, 00:12:29.683 "state": "online", 00:12:29.683 "raid_level": "raid1", 00:12:29.683 "superblock": true, 00:12:29.683 "num_base_bdevs": 3, 00:12:29.683 "num_base_bdevs_discovered": 2, 00:12:29.683 "num_base_bdevs_operational": 2, 00:12:29.683 "base_bdevs_list": [ 00:12:29.683 { 00:12:29.683 "name": null, 00:12:29.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.683 "is_configured": false, 00:12:29.683 "data_offset": 2048, 00:12:29.683 "data_size": 63488 00:12:29.683 }, 00:12:29.683 { 00:12:29.683 "name": "pt2", 00:12:29.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.683 "is_configured": true, 00:12:29.683 "data_offset": 2048, 00:12:29.683 "data_size": 63488 00:12:29.683 }, 00:12:29.683 { 00:12:29.683 "name": "pt3", 00:12:29.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.683 "is_configured": true, 00:12:29.683 "data_offset": 2048, 00:12:29.683 "data_size": 63488 00:12:29.683 } 00:12:29.683 ] 00:12:29.683 }' 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.683 13:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 [2024-10-01 13:46:40.015234] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.942 [2024-10-01 13:46:40.015407] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.942 [2024-10-01 13:46:40.015507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.942 [2024-10-01 13:46:40.015571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.942 [2024-10-01 13:46:40.015584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 [2024-10-01 13:46:40.083153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.942 [2024-10-01 13:46:40.083220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.942 [2024-10-01 13:46:40.083246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:29.942 [2024-10-01 13:46:40.083258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.942 [2024-10-01 13:46:40.085785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.942 [2024-10-01 13:46:40.085932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.942 [2024-10-01 13:46:40.086035] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:29.942 [2024-10-01 13:46:40.086081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.942 [2024-10-01 13:46:40.086205] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:29.942 [2024-10-01 13:46:40.086220] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.942 [2024-10-01 13:46:40.086238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:12:29.942 [2024-10-01 13:46:40.086302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.942 pt1 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.942 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.942 "name": "raid_bdev1", 00:12:29.942 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:29.942 "strip_size_kb": 0, 00:12:29.942 "state": "configuring", 00:12:29.942 "raid_level": "raid1", 00:12:29.942 "superblock": true, 00:12:29.942 "num_base_bdevs": 3, 00:12:29.943 "num_base_bdevs_discovered": 1, 00:12:29.943 "num_base_bdevs_operational": 2, 00:12:29.943 "base_bdevs_list": [ 00:12:29.943 { 00:12:29.943 "name": null, 00:12:29.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.943 "is_configured": false, 00:12:29.943 "data_offset": 2048, 00:12:29.943 "data_size": 63488 00:12:29.943 }, 00:12:29.943 { 00:12:29.943 "name": "pt2", 00:12:29.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.943 "is_configured": true, 00:12:29.943 "data_offset": 2048, 00:12:29.943 "data_size": 63488 00:12:29.943 }, 00:12:29.943 { 00:12:29.943 "name": null, 00:12:29.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.943 "is_configured": false, 00:12:29.943 "data_offset": 2048, 00:12:29.943 "data_size": 63488 00:12:29.943 } 00:12:29.943 ] 00:12:29.943 }' 00:12:29.943 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.943 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.510 [2024-10-01 13:46:40.538552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.510 [2024-10-01 13:46:40.538740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.510 [2024-10-01 13:46:40.538801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:30.510 [2024-10-01 13:46:40.538881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.510 [2024-10-01 13:46:40.539386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.510 [2024-10-01 13:46:40.539511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.510 [2024-10-01 13:46:40.539614] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.510 [2024-10-01 13:46:40.539664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.510 [2024-10-01 13:46:40.539812] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:30.510 [2024-10-01 13:46:40.539822] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.510 [2024-10-01 13:46:40.540100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:30.510 [2024-10-01 13:46:40.540250] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:30.510 [2024-10-01 13:46:40.540263] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:30.510 [2024-10-01 13:46:40.540421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.510 pt3 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.510 "name": "raid_bdev1", 00:12:30.510 "uuid": "0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6", 00:12:30.510 "strip_size_kb": 0, 00:12:30.510 "state": "online", 00:12:30.510 "raid_level": "raid1", 00:12:30.510 "superblock": true, 00:12:30.510 "num_base_bdevs": 3, 00:12:30.510 "num_base_bdevs_discovered": 2, 00:12:30.510 "num_base_bdevs_operational": 2, 00:12:30.510 "base_bdevs_list": [ 00:12:30.510 { 00:12:30.510 "name": null, 00:12:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.510 "is_configured": false, 00:12:30.510 "data_offset": 2048, 00:12:30.510 "data_size": 63488 00:12:30.510 }, 00:12:30.510 { 00:12:30.510 "name": "pt2", 00:12:30.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.510 "is_configured": true, 00:12:30.510 "data_offset": 2048, 00:12:30.510 "data_size": 63488 00:12:30.510 }, 00:12:30.510 { 00:12:30.510 "name": "pt3", 00:12:30.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.510 "is_configured": true, 00:12:30.510 "data_offset": 2048, 00:12:30.510 "data_size": 63488 00:12:30.510 } 00:12:30.510 ] 00:12:30.510 }' 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.510 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.078 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:31.078 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.078 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.078 13:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:31.078 13:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.078 [2024-10-01 13:46:41.018647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 '!=' 0adaf81a-2ad0-4744-83d9-6b5e26ffc0c6 ']' 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68534 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68534 ']' 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68534 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68534 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.078 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68534' 00:12:31.079 killing process with pid 68534 00:12:31.079 13:46:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68534 00:12:31.079 [2024-10-01 13:46:41.102764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.079 [2024-10-01 13:46:41.102873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.079 [2024-10-01 13:46:41.102934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.079 13:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68534 00:12:31.079 [2024-10-01 13:46:41.102948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:31.338 [2024-10-01 13:46:41.408236] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.712 13:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:32.712 00:12:32.712 real 0m7.665s 00:12:32.712 user 0m11.863s 00:12:32.712 sys 0m1.451s 00:12:32.712 13:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.712 13:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.712 ************************************ 00:12:32.712 END TEST raid_superblock_test 00:12:32.712 ************************************ 00:12:32.712 13:46:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:32.712 13:46:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:32.712 13:46:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.712 13:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.712 ************************************ 00:12:32.712 START TEST raid_read_error_test 00:12:32.712 ************************************ 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:12:32.712 13:46:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.712 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:32.713 13:46:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MYD3GT8ckg 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68980 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68980 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 68980 ']' 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.713 13:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.713 [2024-10-01 13:46:42.857958] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:32.713 [2024-10-01 13:46:42.858086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68980 ] 00:12:32.971 [2024-10-01 13:46:43.027666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.228 [2024-10-01 13:46:43.238809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.485 [2024-10-01 13:46:43.448649] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.485 [2024-10-01 13:46:43.448699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 BaseBdev1_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 true 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-10-01 13:46:43.759321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.743 [2024-10-01 13:46:43.759525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.743 [2024-10-01 13:46:43.759584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.743 [2024-10-01 13:46:43.759671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.743 [2024-10-01 13:46:43.762054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.743 [2024-10-01 13:46:43.762209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.743 BaseBdev1 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 BaseBdev2_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 true 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-10-01 13:46:43.837514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.743 [2024-10-01 13:46:43.837694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.743 [2024-10-01 13:46:43.837725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.743 [2024-10-01 13:46:43.837739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.743 [2024-10-01 13:46:43.840091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.743 [2024-10-01 13:46:43.840136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.743 BaseBdev2 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 BaseBdev3_malloc 00:12:33.743 13:46:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 true 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-10-01 13:46:43.906330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.743 [2024-10-01 13:46:43.906508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.743 [2024-10-01 13:46:43.906537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.743 [2024-10-01 13:46:43.906552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.743 [2024-10-01 13:46:43.909016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.743 [2024-10-01 13:46:43.909063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.743 BaseBdev3 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-10-01 13:46:43.918410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.743 [2024-10-01 13:46:43.920580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.743 [2024-10-01 13:46:43.920761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.743 [2024-10-01 13:46:43.920972] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:33.743 [2024-10-01 13:46:43.920986] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.743 [2024-10-01 13:46:43.921261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.743 [2024-10-01 13:46:43.921447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:33.743 [2024-10-01 13:46:43.921463] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:33.743 [2024-10-01 13:46:43.921620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.743 13:46:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.743 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.001 "name": "raid_bdev1", 00:12:34.001 "uuid": "c691acc2-6e66-4333-a5c9-fbe2db6d90c7", 00:12:34.001 "strip_size_kb": 0, 00:12:34.001 "state": "online", 00:12:34.001 "raid_level": "raid1", 00:12:34.001 "superblock": true, 00:12:34.001 "num_base_bdevs": 3, 00:12:34.001 "num_base_bdevs_discovered": 3, 00:12:34.001 "num_base_bdevs_operational": 3, 00:12:34.001 "base_bdevs_list": [ 00:12:34.001 { 00:12:34.001 "name": "BaseBdev1", 00:12:34.001 "uuid": "13e317e5-75e3-5068-95c4-e520bc958e55", 00:12:34.001 "is_configured": true, 00:12:34.001 "data_offset": 2048, 00:12:34.001 "data_size": 63488 00:12:34.001 }, 00:12:34.001 { 00:12:34.001 "name": "BaseBdev2", 00:12:34.001 "uuid": "7e404bbb-7672-5645-80bd-6b9a047a424c", 00:12:34.001 "is_configured": true, 00:12:34.001 "data_offset": 2048, 00:12:34.001 "data_size": 63488 
00:12:34.001 }, 00:12:34.001 { 00:12:34.001 "name": "BaseBdev3", 00:12:34.001 "uuid": "c9768c12-884a-56f5-a5f5-ed11ad2b77b9", 00:12:34.001 "is_configured": true, 00:12:34.001 "data_offset": 2048, 00:12:34.001 "data_size": 63488 00:12:34.001 } 00:12:34.001 ] 00:12:34.001 }' 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.001 13:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.260 13:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:34.260 13:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.260 [2024-10-01 13:46:44.391114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.195 
13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.195 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.195 "name": "raid_bdev1", 00:12:35.195 "uuid": "c691acc2-6e66-4333-a5c9-fbe2db6d90c7", 00:12:35.195 "strip_size_kb": 0, 00:12:35.195 "state": "online", 00:12:35.195 "raid_level": "raid1", 00:12:35.195 "superblock": true, 00:12:35.195 "num_base_bdevs": 3, 00:12:35.195 "num_base_bdevs_discovered": 3, 00:12:35.195 "num_base_bdevs_operational": 3, 00:12:35.195 "base_bdevs_list": [ 00:12:35.195 { 00:12:35.196 "name": "BaseBdev1", 00:12:35.196 "uuid": "13e317e5-75e3-5068-95c4-e520bc958e55", 
00:12:35.196 "is_configured": true, 00:12:35.196 "data_offset": 2048, 00:12:35.196 "data_size": 63488 00:12:35.196 }, 00:12:35.196 { 00:12:35.196 "name": "BaseBdev2", 00:12:35.196 "uuid": "7e404bbb-7672-5645-80bd-6b9a047a424c", 00:12:35.196 "is_configured": true, 00:12:35.196 "data_offset": 2048, 00:12:35.196 "data_size": 63488 00:12:35.196 }, 00:12:35.196 { 00:12:35.196 "name": "BaseBdev3", 00:12:35.196 "uuid": "c9768c12-884a-56f5-a5f5-ed11ad2b77b9", 00:12:35.196 "is_configured": true, 00:12:35.196 "data_offset": 2048, 00:12:35.196 "data_size": 63488 00:12:35.196 } 00:12:35.196 ] 00:12:35.196 }' 00:12:35.196 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.196 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.761 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.761 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 [2024-10-01 13:46:45.758943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.761 [2024-10-01 13:46:45.758982] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.761 [2024-10-01 13:46:45.761621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.761 [2024-10-01 13:46:45.761675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.761 [2024-10-01 13:46:45.761777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.761 [2024-10-01 13:46:45.761799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:35.761 { 00:12:35.761 "results": [ 00:12:35.761 { 00:12:35.761 "job": "raid_bdev1", 
00:12:35.761 "core_mask": "0x1", 00:12:35.761 "workload": "randrw", 00:12:35.761 "percentage": 50, 00:12:35.761 "status": "finished", 00:12:35.761 "queue_depth": 1, 00:12:35.762 "io_size": 131072, 00:12:35.762 "runtime": 1.367771, 00:12:35.762 "iops": 13835.649388676906, 00:12:35.762 "mibps": 1729.4561735846132, 00:12:35.762 "io_failed": 0, 00:12:35.762 "io_timeout": 0, 00:12:35.762 "avg_latency_us": 69.68475992322705, 00:12:35.762 "min_latency_us": 24.160642570281123, 00:12:35.762 "max_latency_us": 1460.7421686746989 00:12:35.762 } 00:12:35.762 ], 00:12:35.762 "core_count": 1 00:12:35.762 } 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68980 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 68980 ']' 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 68980 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68980 00:12:35.762 killing process with pid 68980 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68980' 00:12:35.762 13:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 68980 00:12:35.762 [2024-10-01 13:46:45.799063] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.762 13:46:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 68980 00:12:36.018 [2024-10-01 13:46:46.031299] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MYD3GT8ckg 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:37.388 00:12:37.388 real 0m4.637s 00:12:37.388 user 0m5.375s 00:12:37.388 sys 0m0.623s 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.388 ************************************ 00:12:37.388 END TEST raid_read_error_test 00:12:37.388 ************************************ 00:12:37.388 13:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 13:46:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:37.388 13:46:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:37.388 13:46:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.388 13:46:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.388 ************************************ 00:12:37.388 START TEST raid_write_error_test 00:12:37.389 ************************************ 00:12:37.389 13:46:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GfTwV3Gj77 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69124 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69124 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69124 ']' 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.389 13:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.389 [2024-10-01 13:46:47.553039] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:37.389 [2024-10-01 13:46:47.553175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69124 ] 00:12:37.647 [2024-10-01 13:46:47.712540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.904 [2024-10-01 13:46:47.925521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.162 [2024-10-01 13:46:48.123347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.162 [2024-10-01 13:46:48.123433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 BaseBdev1_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 true 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 [2024-10-01 13:46:48.469141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.444 [2024-10-01 13:46:48.469208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.444 [2024-10-01 13:46:48.469230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.444 [2024-10-01 13:46:48.469245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.444 [2024-10-01 13:46:48.471763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.444 [2024-10-01 13:46:48.471809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.444 BaseBdev1 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.444 BaseBdev2_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 true 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 [2024-10-01 13:46:48.548533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.444 [2024-10-01 13:46:48.548595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.444 [2024-10-01 13:46:48.548617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.444 [2024-10-01 13:46:48.548632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.444 [2024-10-01 13:46:48.551120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.444 [2024-10-01 13:46:48.551166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.444 BaseBdev2 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.444 13:46:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 BaseBdev3_malloc 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.445 true 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.445 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.722 [2024-10-01 13:46:48.618641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.722 [2024-10-01 13:46:48.618709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.722 [2024-10-01 13:46:48.618732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.722 [2024-10-01 13:46:48.618747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.722 [2024-10-01 13:46:48.621343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.722 [2024-10-01 13:46:48.621389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:38.722 BaseBdev3 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.722 [2024-10-01 13:46:48.630695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.722 [2024-10-01 13:46:48.632939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.722 [2024-10-01 13:46:48.633020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.722 [2024-10-01 13:46:48.633225] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:38.722 [2024-10-01 13:46:48.633238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.722 [2024-10-01 13:46:48.633569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:38.722 [2024-10-01 13:46:48.633740] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:38.722 [2024-10-01 13:46:48.633759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:38.722 [2024-10-01 13:46:48.633932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.722 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.723 "name": "raid_bdev1", 00:12:38.723 "uuid": "e821fa27-d83f-4146-82dd-94cc7f052717", 00:12:38.723 "strip_size_kb": 0, 00:12:38.723 "state": "online", 00:12:38.723 "raid_level": "raid1", 00:12:38.723 "superblock": true, 00:12:38.723 "num_base_bdevs": 3, 00:12:38.723 "num_base_bdevs_discovered": 3, 00:12:38.723 "num_base_bdevs_operational": 3, 00:12:38.723 "base_bdevs_list": [ 00:12:38.723 { 00:12:38.723 "name": "BaseBdev1", 00:12:38.723 
"uuid": "dc68a715-069d-5306-83e1-fd2afbc42d5e", 00:12:38.723 "is_configured": true, 00:12:38.723 "data_offset": 2048, 00:12:38.723 "data_size": 63488 00:12:38.723 }, 00:12:38.723 { 00:12:38.723 "name": "BaseBdev2", 00:12:38.723 "uuid": "e7343cb4-0a88-53c4-9442-ab00532dc7b2", 00:12:38.723 "is_configured": true, 00:12:38.723 "data_offset": 2048, 00:12:38.723 "data_size": 63488 00:12:38.723 }, 00:12:38.723 { 00:12:38.723 "name": "BaseBdev3", 00:12:38.723 "uuid": "e3aa04bc-db70-5c13-b8a8-10e702bb4884", 00:12:38.723 "is_configured": true, 00:12:38.723 "data_offset": 2048, 00:12:38.723 "data_size": 63488 00:12:38.723 } 00:12:38.723 ] 00:12:38.723 }' 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.723 13:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.981 13:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:38.981 13:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.981 [2024-10-01 13:46:49.151253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.915 [2024-10-01 13:46:50.062848] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:39.915 [2024-10-01 13:46:50.062908] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.915 [2024-10-01 13:46:50.063124] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.915 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.174 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.174 "name": "raid_bdev1", 00:12:40.174 "uuid": "e821fa27-d83f-4146-82dd-94cc7f052717", 00:12:40.174 "strip_size_kb": 0, 00:12:40.174 "state": "online", 00:12:40.174 "raid_level": "raid1", 00:12:40.174 "superblock": true, 00:12:40.174 "num_base_bdevs": 3, 00:12:40.174 "num_base_bdevs_discovered": 2, 00:12:40.174 "num_base_bdevs_operational": 2, 00:12:40.174 "base_bdevs_list": [ 00:12:40.174 { 00:12:40.174 "name": null, 00:12:40.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.174 "is_configured": false, 00:12:40.174 "data_offset": 0, 00:12:40.174 "data_size": 63488 00:12:40.174 }, 00:12:40.174 { 00:12:40.174 "name": "BaseBdev2", 00:12:40.174 "uuid": "e7343cb4-0a88-53c4-9442-ab00532dc7b2", 00:12:40.174 "is_configured": true, 00:12:40.174 "data_offset": 2048, 00:12:40.174 "data_size": 63488 00:12:40.174 }, 00:12:40.174 { 00:12:40.174 "name": "BaseBdev3", 00:12:40.174 "uuid": "e3aa04bc-db70-5c13-b8a8-10e702bb4884", 00:12:40.174 "is_configured": true, 00:12:40.174 "data_offset": 2048, 00:12:40.174 "data_size": 63488 00:12:40.174 } 00:12:40.174 ] 00:12:40.174 }' 00:12:40.174 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.174 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.432 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.433 [2024-10-01 13:46:50.484842] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.433 [2024-10-01 13:46:50.484891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.433 [2024-10-01 13:46:50.487457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.433 [2024-10-01 13:46:50.487504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.433 [2024-10-01 13:46:50.487584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.433 [2024-10-01 13:46:50.487600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:40.433 { 00:12:40.433 "results": [ 00:12:40.433 { 00:12:40.433 "job": "raid_bdev1", 00:12:40.433 "core_mask": "0x1", 00:12:40.433 "workload": "randrw", 00:12:40.433 "percentage": 50, 00:12:40.433 "status": "finished", 00:12:40.433 "queue_depth": 1, 00:12:40.433 "io_size": 131072, 00:12:40.433 "runtime": 1.333616, 00:12:40.433 "iops": 15111.54635217334, 00:12:40.433 "mibps": 1888.9432940216675, 00:12:40.433 "io_failed": 0, 00:12:40.433 "io_timeout": 0, 00:12:40.433 "avg_latency_us": 63.55318456378983, 00:12:40.433 "min_latency_us": 24.057831325301205, 00:12:40.433 "max_latency_us": 1802.8979919678716 00:12:40.433 } 00:12:40.433 ], 00:12:40.433 "core_count": 1 00:12:40.433 } 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69124 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69124 ']' 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69124 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:40.433 13:46:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69124 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.433 killing process with pid 69124 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69124' 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69124 00:12:40.433 [2024-10-01 13:46:50.531009] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.433 13:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69124 00:12:40.692 [2024-10-01 13:46:50.762108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GfTwV3Gj77 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.068 00:12:42.068 real 0m4.661s 00:12:42.068 user 0m5.438s 00:12:42.068 sys 0m0.624s 00:12:42.068 13:46:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.068 13:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.068 ************************************ 00:12:42.068 END TEST raid_write_error_test 00:12:42.068 ************************************ 00:12:42.068 13:46:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:42.068 13:46:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:42.068 13:46:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:42.068 13:46:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:42.068 13:46:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.068 13:46:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.068 ************************************ 00:12:42.068 START TEST raid_state_function_test 00:12:42.068 ************************************ 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.068 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:42.069 
13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69269 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69269' 00:12:42.069 Process raid pid: 69269 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69269 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69269 ']' 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.069 13:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.328 [2024-10-01 13:46:52.299393] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:42.328 [2024-10-01 13:46:52.299534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.328 [2024-10-01 13:46:52.465986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.587 [2024-10-01 13:46:52.686065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.845 [2024-10-01 13:46:52.907302] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.845 [2024-10-01 13:46:52.907341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.104 [2024-10-01 13:46:53.138224] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.104 [2024-10-01 13:46:53.138280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.104 [2024-10-01 13:46:53.138296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.104 [2024-10-01 13:46:53.138309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.104 [2024-10-01 13:46:53.138317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:43.104 [2024-10-01 13:46:53.138329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.104 [2024-10-01 13:46:53.138337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.104 [2024-10-01 13:46:53.138350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.104 "name": "Existed_Raid", 00:12:43.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.104 "strip_size_kb": 64, 00:12:43.104 "state": "configuring", 00:12:43.104 "raid_level": "raid0", 00:12:43.104 "superblock": false, 00:12:43.104 "num_base_bdevs": 4, 00:12:43.104 "num_base_bdevs_discovered": 0, 00:12:43.104 "num_base_bdevs_operational": 4, 00:12:43.104 "base_bdevs_list": [ 00:12:43.104 { 00:12:43.104 "name": "BaseBdev1", 00:12:43.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.104 "is_configured": false, 00:12:43.104 "data_offset": 0, 00:12:43.104 "data_size": 0 00:12:43.104 }, 00:12:43.104 { 00:12:43.104 "name": "BaseBdev2", 00:12:43.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.104 "is_configured": false, 00:12:43.104 "data_offset": 0, 00:12:43.104 "data_size": 0 00:12:43.104 }, 00:12:43.104 { 00:12:43.104 "name": "BaseBdev3", 00:12:43.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.104 "is_configured": false, 00:12:43.104 "data_offset": 0, 00:12:43.104 "data_size": 0 00:12:43.104 }, 00:12:43.104 { 00:12:43.104 "name": "BaseBdev4", 00:12:43.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.104 "is_configured": false, 00:12:43.104 "data_offset": 0, 00:12:43.104 "data_size": 0 00:12:43.104 } 00:12:43.104 ] 00:12:43.104 }' 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.104 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.362 [2024-10-01 13:46:53.545573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.362 [2024-10-01 13:46:53.545623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.362 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.621 [2024-10-01 13:46:53.557585] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.621 [2024-10-01 13:46:53.557635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.621 [2024-10-01 13:46:53.557646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.621 [2024-10-01 13:46:53.557659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.621 [2024-10-01 13:46:53.557666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.621 [2024-10-01 13:46:53.557678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.621 [2024-10-01 13:46:53.557686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.621 [2024-10-01 13:46:53.557697] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.621 [2024-10-01 13:46:53.622073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.621 BaseBdev1 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.621 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.621 [ 00:12:43.621 { 00:12:43.621 "name": "BaseBdev1", 00:12:43.621 "aliases": [ 00:12:43.621 "dc56255d-7ebb-4060-b38c-6da13595764f" 00:12:43.621 ], 00:12:43.621 "product_name": "Malloc disk", 00:12:43.621 "block_size": 512, 00:12:43.621 "num_blocks": 65536, 00:12:43.621 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:43.621 "assigned_rate_limits": { 00:12:43.621 "rw_ios_per_sec": 0, 00:12:43.621 "rw_mbytes_per_sec": 0, 00:12:43.621 "r_mbytes_per_sec": 0, 00:12:43.621 "w_mbytes_per_sec": 0 00:12:43.621 }, 00:12:43.622 "claimed": true, 00:12:43.622 "claim_type": "exclusive_write", 00:12:43.622 "zoned": false, 00:12:43.622 "supported_io_types": { 00:12:43.622 "read": true, 00:12:43.622 "write": true, 00:12:43.622 "unmap": true, 00:12:43.622 "flush": true, 00:12:43.622 "reset": true, 00:12:43.622 "nvme_admin": false, 00:12:43.622 "nvme_io": false, 00:12:43.622 "nvme_io_md": false, 00:12:43.622 "write_zeroes": true, 00:12:43.622 "zcopy": true, 00:12:43.622 "get_zone_info": false, 00:12:43.622 "zone_management": false, 00:12:43.622 "zone_append": false, 00:12:43.622 "compare": false, 00:12:43.622 "compare_and_write": false, 00:12:43.622 "abort": true, 00:12:43.622 "seek_hole": false, 00:12:43.622 "seek_data": false, 00:12:43.622 "copy": true, 00:12:43.622 "nvme_iov_md": false 00:12:43.622 }, 00:12:43.622 "memory_domains": [ 00:12:43.622 { 00:12:43.622 "dma_device_id": "system", 00:12:43.622 "dma_device_type": 1 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.622 "dma_device_type": 2 00:12:43.622 } 00:12:43.622 ], 00:12:43.622 "driver_specific": {} 00:12:43.622 } 00:12:43.622 ] 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.622 "name": "Existed_Raid", 
00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "strip_size_kb": 64, 00:12:43.622 "state": "configuring", 00:12:43.622 "raid_level": "raid0", 00:12:43.622 "superblock": false, 00:12:43.622 "num_base_bdevs": 4, 00:12:43.622 "num_base_bdevs_discovered": 1, 00:12:43.622 "num_base_bdevs_operational": 4, 00:12:43.622 "base_bdevs_list": [ 00:12:43.622 { 00:12:43.622 "name": "BaseBdev1", 00:12:43.622 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:43.622 "is_configured": true, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 65536 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev2", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev3", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 }, 00:12:43.622 { 00:12:43.622 "name": "BaseBdev4", 00:12:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.622 "is_configured": false, 00:12:43.622 "data_offset": 0, 00:12:43.622 "data_size": 0 00:12:43.622 } 00:12:43.622 ] 00:12:43.622 }' 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.622 13:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.188 [2024-10-01 13:46:54.113465] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.188 [2024-10-01 13:46:54.113527] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.188 [2024-10-01 13:46:54.125510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.188 [2024-10-01 13:46:54.127682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.188 [2024-10-01 13:46:54.127738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.188 [2024-10-01 13:46:54.127750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.188 [2024-10-01 13:46:54.127768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.188 [2024-10-01 13:46:54.127777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.188 [2024-10-01 13:46:54.127789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.188 "name": "Existed_Raid", 00:12:44.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.188 "strip_size_kb": 64, 00:12:44.188 "state": "configuring", 00:12:44.188 "raid_level": "raid0", 00:12:44.188 "superblock": false, 00:12:44.188 "num_base_bdevs": 4, 00:12:44.188 
"num_base_bdevs_discovered": 1, 00:12:44.188 "num_base_bdevs_operational": 4, 00:12:44.188 "base_bdevs_list": [ 00:12:44.188 { 00:12:44.188 "name": "BaseBdev1", 00:12:44.188 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:44.188 "is_configured": true, 00:12:44.188 "data_offset": 0, 00:12:44.188 "data_size": 65536 00:12:44.188 }, 00:12:44.188 { 00:12:44.188 "name": "BaseBdev2", 00:12:44.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.188 "is_configured": false, 00:12:44.188 "data_offset": 0, 00:12:44.188 "data_size": 0 00:12:44.188 }, 00:12:44.188 { 00:12:44.188 "name": "BaseBdev3", 00:12:44.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.188 "is_configured": false, 00:12:44.188 "data_offset": 0, 00:12:44.188 "data_size": 0 00:12:44.188 }, 00:12:44.188 { 00:12:44.188 "name": "BaseBdev4", 00:12:44.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.188 "is_configured": false, 00:12:44.188 "data_offset": 0, 00:12:44.188 "data_size": 0 00:12:44.188 } 00:12:44.188 ] 00:12:44.188 }' 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.188 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.446 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.446 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.446 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.446 [2024-10-01 13:46:54.596060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.446 BaseBdev2 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.447 13:46:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.447 [ 00:12:44.447 { 00:12:44.447 "name": "BaseBdev2", 00:12:44.447 "aliases": [ 00:12:44.447 "259e6c1d-ee48-4456-b252-1fda1e20772c" 00:12:44.447 ], 00:12:44.447 "product_name": "Malloc disk", 00:12:44.447 "block_size": 512, 00:12:44.447 "num_blocks": 65536, 00:12:44.447 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:44.447 "assigned_rate_limits": { 00:12:44.447 "rw_ios_per_sec": 0, 00:12:44.447 "rw_mbytes_per_sec": 0, 00:12:44.447 "r_mbytes_per_sec": 0, 00:12:44.447 "w_mbytes_per_sec": 0 00:12:44.447 }, 00:12:44.447 "claimed": true, 00:12:44.447 "claim_type": "exclusive_write", 00:12:44.447 "zoned": false, 00:12:44.447 "supported_io_types": { 
00:12:44.447 "read": true, 00:12:44.447 "write": true, 00:12:44.447 "unmap": true, 00:12:44.447 "flush": true, 00:12:44.447 "reset": true, 00:12:44.447 "nvme_admin": false, 00:12:44.447 "nvme_io": false, 00:12:44.447 "nvme_io_md": false, 00:12:44.447 "write_zeroes": true, 00:12:44.447 "zcopy": true, 00:12:44.447 "get_zone_info": false, 00:12:44.447 "zone_management": false, 00:12:44.447 "zone_append": false, 00:12:44.447 "compare": false, 00:12:44.447 "compare_and_write": false, 00:12:44.447 "abort": true, 00:12:44.447 "seek_hole": false, 00:12:44.447 "seek_data": false, 00:12:44.447 "copy": true, 00:12:44.447 "nvme_iov_md": false 00:12:44.447 }, 00:12:44.447 "memory_domains": [ 00:12:44.447 { 00:12:44.447 "dma_device_id": "system", 00:12:44.447 "dma_device_type": 1 00:12:44.447 }, 00:12:44.447 { 00:12:44.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.447 "dma_device_type": 2 00:12:44.447 } 00:12:44.447 ], 00:12:44.447 "driver_specific": {} 00:12:44.447 } 00:12:44.447 ] 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.447 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.705 "name": "Existed_Raid", 00:12:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.705 "strip_size_kb": 64, 00:12:44.705 "state": "configuring", 00:12:44.705 "raid_level": "raid0", 00:12:44.705 "superblock": false, 00:12:44.705 "num_base_bdevs": 4, 00:12:44.705 "num_base_bdevs_discovered": 2, 00:12:44.705 "num_base_bdevs_operational": 4, 00:12:44.705 "base_bdevs_list": [ 00:12:44.705 { 00:12:44.705 "name": "BaseBdev1", 00:12:44.705 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:44.705 "is_configured": true, 00:12:44.705 "data_offset": 0, 00:12:44.705 "data_size": 65536 00:12:44.705 }, 00:12:44.705 { 00:12:44.705 "name": "BaseBdev2", 00:12:44.705 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:44.705 
"is_configured": true, 00:12:44.705 "data_offset": 0, 00:12:44.705 "data_size": 65536 00:12:44.705 }, 00:12:44.705 { 00:12:44.705 "name": "BaseBdev3", 00:12:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.705 "is_configured": false, 00:12:44.705 "data_offset": 0, 00:12:44.705 "data_size": 0 00:12:44.705 }, 00:12:44.705 { 00:12:44.705 "name": "BaseBdev4", 00:12:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.705 "is_configured": false, 00:12:44.705 "data_offset": 0, 00:12:44.705 "data_size": 0 00:12:44.705 } 00:12:44.705 ] 00:12:44.705 }' 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.705 13:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 [2024-10-01 13:46:55.074468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.964 BaseBdev3 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 [ 00:12:44.964 { 00:12:44.964 "name": "BaseBdev3", 00:12:44.964 "aliases": [ 00:12:44.964 "1f65cf8e-3372-46ce-aeca-33f0c78ace1e" 00:12:44.964 ], 00:12:44.964 "product_name": "Malloc disk", 00:12:44.964 "block_size": 512, 00:12:44.964 "num_blocks": 65536, 00:12:44.964 "uuid": "1f65cf8e-3372-46ce-aeca-33f0c78ace1e", 00:12:44.964 "assigned_rate_limits": { 00:12:44.964 "rw_ios_per_sec": 0, 00:12:44.964 "rw_mbytes_per_sec": 0, 00:12:44.964 "r_mbytes_per_sec": 0, 00:12:44.964 "w_mbytes_per_sec": 0 00:12:44.964 }, 00:12:44.964 "claimed": true, 00:12:44.964 "claim_type": "exclusive_write", 00:12:44.964 "zoned": false, 00:12:44.964 "supported_io_types": { 00:12:44.964 "read": true, 00:12:44.964 "write": true, 00:12:44.964 "unmap": true, 00:12:44.964 "flush": true, 00:12:44.964 "reset": true, 00:12:44.964 "nvme_admin": false, 00:12:44.964 "nvme_io": false, 00:12:44.964 "nvme_io_md": false, 00:12:44.964 "write_zeroes": true, 00:12:44.964 "zcopy": true, 00:12:44.964 "get_zone_info": false, 00:12:44.964 "zone_management": false, 00:12:44.964 "zone_append": false, 00:12:44.964 "compare": false, 00:12:44.964 "compare_and_write": false, 
00:12:44.964 "abort": true, 00:12:44.964 "seek_hole": false, 00:12:44.964 "seek_data": false, 00:12:44.964 "copy": true, 00:12:44.964 "nvme_iov_md": false 00:12:44.964 }, 00:12:44.964 "memory_domains": [ 00:12:44.964 { 00:12:44.964 "dma_device_id": "system", 00:12:44.964 "dma_device_type": 1 00:12:44.964 }, 00:12:44.964 { 00:12:44.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.964 "dma_device_type": 2 00:12:44.964 } 00:12:44.964 ], 00:12:44.964 "driver_specific": {} 00:12:44.964 } 00:12:44.964 ] 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.222 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.223 "name": "Existed_Raid", 00:12:45.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.223 "strip_size_kb": 64, 00:12:45.223 "state": "configuring", 00:12:45.223 "raid_level": "raid0", 00:12:45.223 "superblock": false, 00:12:45.223 "num_base_bdevs": 4, 00:12:45.223 "num_base_bdevs_discovered": 3, 00:12:45.223 "num_base_bdevs_operational": 4, 00:12:45.223 "base_bdevs_list": [ 00:12:45.223 { 00:12:45.223 "name": "BaseBdev1", 00:12:45.223 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:45.223 "is_configured": true, 00:12:45.223 "data_offset": 0, 00:12:45.223 "data_size": 65536 00:12:45.223 }, 00:12:45.223 { 00:12:45.223 "name": "BaseBdev2", 00:12:45.223 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:45.223 "is_configured": true, 00:12:45.223 "data_offset": 0, 00:12:45.223 "data_size": 65536 00:12:45.223 }, 00:12:45.223 { 00:12:45.223 "name": "BaseBdev3", 00:12:45.223 "uuid": "1f65cf8e-3372-46ce-aeca-33f0c78ace1e", 00:12:45.223 "is_configured": true, 00:12:45.223 "data_offset": 0, 00:12:45.223 "data_size": 65536 00:12:45.223 }, 00:12:45.223 { 00:12:45.223 "name": "BaseBdev4", 00:12:45.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.223 "is_configured": false, 
00:12:45.223 "data_offset": 0, 00:12:45.223 "data_size": 0 00:12:45.223 } 00:12:45.223 ] 00:12:45.223 }' 00:12:45.223 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.223 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 [2024-10-01 13:46:55.576811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.494 [2024-10-01 13:46:55.576864] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.494 [2024-10-01 13:46:55.576881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:45.494 [2024-10-01 13:46:55.577164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:45.494 [2024-10-01 13:46:55.577327] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.494 [2024-10-01 13:46:55.577344] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:45.494 [2024-10-01 13:46:55.577595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.494 BaseBdev4 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.494 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.494 [ 00:12:45.494 { 00:12:45.494 "name": "BaseBdev4", 00:12:45.494 "aliases": [ 00:12:45.494 "09b4c8e1-c62b-451a-84b3-28f3b829a741" 00:12:45.494 ], 00:12:45.494 "product_name": "Malloc disk", 00:12:45.494 "block_size": 512, 00:12:45.494 "num_blocks": 65536, 00:12:45.494 "uuid": "09b4c8e1-c62b-451a-84b3-28f3b829a741", 00:12:45.494 "assigned_rate_limits": { 00:12:45.494 "rw_ios_per_sec": 0, 00:12:45.495 "rw_mbytes_per_sec": 0, 00:12:45.495 "r_mbytes_per_sec": 0, 00:12:45.495 "w_mbytes_per_sec": 0 00:12:45.495 }, 00:12:45.495 "claimed": true, 00:12:45.495 "claim_type": "exclusive_write", 00:12:45.495 "zoned": false, 00:12:45.495 "supported_io_types": { 00:12:45.495 "read": true, 00:12:45.495 "write": true, 00:12:45.495 "unmap": true, 00:12:45.495 "flush": true, 00:12:45.495 "reset": true, 00:12:45.495 
"nvme_admin": false, 00:12:45.495 "nvme_io": false, 00:12:45.495 "nvme_io_md": false, 00:12:45.495 "write_zeroes": true, 00:12:45.495 "zcopy": true, 00:12:45.495 "get_zone_info": false, 00:12:45.495 "zone_management": false, 00:12:45.495 "zone_append": false, 00:12:45.495 "compare": false, 00:12:45.495 "compare_and_write": false, 00:12:45.495 "abort": true, 00:12:45.495 "seek_hole": false, 00:12:45.495 "seek_data": false, 00:12:45.495 "copy": true, 00:12:45.495 "nvme_iov_md": false 00:12:45.495 }, 00:12:45.495 "memory_domains": [ 00:12:45.495 { 00:12:45.495 "dma_device_id": "system", 00:12:45.495 "dma_device_type": 1 00:12:45.495 }, 00:12:45.495 { 00:12:45.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.495 "dma_device_type": 2 00:12:45.495 } 00:12:45.495 ], 00:12:45.495 "driver_specific": {} 00:12:45.495 } 00:12:45.495 ] 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.495 13:46:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.495 "name": "Existed_Raid", 00:12:45.495 "uuid": "121ebfa5-000f-4d71-bc47-183c15e960a7", 00:12:45.495 "strip_size_kb": 64, 00:12:45.495 "state": "online", 00:12:45.495 "raid_level": "raid0", 00:12:45.495 "superblock": false, 00:12:45.495 "num_base_bdevs": 4, 00:12:45.495 "num_base_bdevs_discovered": 4, 00:12:45.495 "num_base_bdevs_operational": 4, 00:12:45.495 "base_bdevs_list": [ 00:12:45.495 { 00:12:45.495 "name": "BaseBdev1", 00:12:45.495 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:45.495 "is_configured": true, 00:12:45.495 "data_offset": 0, 00:12:45.495 "data_size": 65536 00:12:45.495 }, 00:12:45.495 { 00:12:45.495 "name": "BaseBdev2", 00:12:45.495 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:45.495 "is_configured": true, 00:12:45.495 "data_offset": 0, 00:12:45.495 "data_size": 65536 00:12:45.495 }, 00:12:45.495 { 00:12:45.495 "name": "BaseBdev3", 00:12:45.495 "uuid": 
"1f65cf8e-3372-46ce-aeca-33f0c78ace1e", 00:12:45.495 "is_configured": true, 00:12:45.495 "data_offset": 0, 00:12:45.495 "data_size": 65536 00:12:45.495 }, 00:12:45.495 { 00:12:45.495 "name": "BaseBdev4", 00:12:45.495 "uuid": "09b4c8e1-c62b-451a-84b3-28f3b829a741", 00:12:45.495 "is_configured": true, 00:12:45.495 "data_offset": 0, 00:12:45.495 "data_size": 65536 00:12:45.495 } 00:12:45.495 ] 00:12:45.495 }' 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.495 13:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.087 [2024-10-01 13:46:56.036776] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.087 13:46:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.087 "name": "Existed_Raid", 00:12:46.087 "aliases": [ 00:12:46.087 "121ebfa5-000f-4d71-bc47-183c15e960a7" 00:12:46.087 ], 00:12:46.087 "product_name": "Raid Volume", 00:12:46.087 "block_size": 512, 00:12:46.087 "num_blocks": 262144, 00:12:46.087 "uuid": "121ebfa5-000f-4d71-bc47-183c15e960a7", 00:12:46.087 "assigned_rate_limits": { 00:12:46.087 "rw_ios_per_sec": 0, 00:12:46.087 "rw_mbytes_per_sec": 0, 00:12:46.087 "r_mbytes_per_sec": 0, 00:12:46.087 "w_mbytes_per_sec": 0 00:12:46.087 }, 00:12:46.087 "claimed": false, 00:12:46.087 "zoned": false, 00:12:46.087 "supported_io_types": { 00:12:46.087 "read": true, 00:12:46.087 "write": true, 00:12:46.087 "unmap": true, 00:12:46.087 "flush": true, 00:12:46.087 "reset": true, 00:12:46.087 "nvme_admin": false, 00:12:46.087 "nvme_io": false, 00:12:46.087 "nvme_io_md": false, 00:12:46.087 "write_zeroes": true, 00:12:46.087 "zcopy": false, 00:12:46.087 "get_zone_info": false, 00:12:46.087 "zone_management": false, 00:12:46.087 "zone_append": false, 00:12:46.087 "compare": false, 00:12:46.087 "compare_and_write": false, 00:12:46.087 "abort": false, 00:12:46.087 "seek_hole": false, 00:12:46.087 "seek_data": false, 00:12:46.087 "copy": false, 00:12:46.087 "nvme_iov_md": false 00:12:46.087 }, 00:12:46.087 "memory_domains": [ 00:12:46.087 { 00:12:46.087 "dma_device_id": "system", 00:12:46.087 "dma_device_type": 1 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.087 "dma_device_type": 2 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "system", 00:12:46.087 "dma_device_type": 1 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.087 "dma_device_type": 2 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "system", 00:12:46.087 "dma_device_type": 1 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:46.087 "dma_device_type": 2 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "system", 00:12:46.087 "dma_device_type": 1 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.087 "dma_device_type": 2 00:12:46.087 } 00:12:46.087 ], 00:12:46.087 "driver_specific": { 00:12:46.087 "raid": { 00:12:46.087 "uuid": "121ebfa5-000f-4d71-bc47-183c15e960a7", 00:12:46.087 "strip_size_kb": 64, 00:12:46.087 "state": "online", 00:12:46.087 "raid_level": "raid0", 00:12:46.087 "superblock": false, 00:12:46.087 "num_base_bdevs": 4, 00:12:46.087 "num_base_bdevs_discovered": 4, 00:12:46.087 "num_base_bdevs_operational": 4, 00:12:46.087 "base_bdevs_list": [ 00:12:46.087 { 00:12:46.087 "name": "BaseBdev1", 00:12:46.087 "uuid": "dc56255d-7ebb-4060-b38c-6da13595764f", 00:12:46.087 "is_configured": true, 00:12:46.087 "data_offset": 0, 00:12:46.087 "data_size": 65536 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "name": "BaseBdev2", 00:12:46.087 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:46.087 "is_configured": true, 00:12:46.087 "data_offset": 0, 00:12:46.087 "data_size": 65536 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "name": "BaseBdev3", 00:12:46.087 "uuid": "1f65cf8e-3372-46ce-aeca-33f0c78ace1e", 00:12:46.087 "is_configured": true, 00:12:46.087 "data_offset": 0, 00:12:46.087 "data_size": 65536 00:12:46.087 }, 00:12:46.087 { 00:12:46.087 "name": "BaseBdev4", 00:12:46.087 "uuid": "09b4c8e1-c62b-451a-84b3-28f3b829a741", 00:12:46.087 "is_configured": true, 00:12:46.087 "data_offset": 0, 00:12:46.087 "data_size": 65536 00:12:46.087 } 00:12:46.087 ] 00:12:46.087 } 00:12:46.087 } 00:12:46.087 }' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:46.087 BaseBdev2 00:12:46.087 BaseBdev3 
00:12:46.087 BaseBdev4' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.087 13:46:56 
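An aside on the comparison pattern above: `cmp_raid_bdev='512 '` and the test `[[ 512 == \5\1\2\ \ \ ]]` come from jq's `join`, which stringifies numbers and renders absent (null) fields as empty strings, so a bdev with no metadata fields produces `512` followed by three separator spaces. A minimal offline sketch, assuming jq ≥ 1.6 and using illustrative sample JSON (not taken from this run):

```shell
# A stand-in for one entry of `bdev_get_bdevs` output for a malloc bdev:
# md_size, md_interleave and dif_type are absent, i.e. null in jq.
bdev_json='{"block_size": 512}'

# join(" ") stringifies the number and turns each null into "",
# leaving "512" plus three separator spaces.
cmp=$(printf '%s' "$bdev_json" \
  | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

printf '[%s]\n' "$cmp"   # prints "[512   ]"
```

This is why the `bdev_raid.sh@193` check must escape every trailing space in the pattern: an unquoted trailing blank would otherwise be eaten by word splitting.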
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.087 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.347 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.348 13:46:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.348 [2024-10-01 13:46:56.368041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.348 [2024-10-01 13:46:56.368188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.348 [2024-10-01 13:46:56.368334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.348 "name": "Existed_Raid", 00:12:46.348 "uuid": "121ebfa5-000f-4d71-bc47-183c15e960a7", 00:12:46.348 "strip_size_kb": 64, 00:12:46.348 "state": "offline", 00:12:46.348 "raid_level": "raid0", 00:12:46.348 "superblock": false, 00:12:46.348 "num_base_bdevs": 4, 00:12:46.348 "num_base_bdevs_discovered": 3, 00:12:46.348 "num_base_bdevs_operational": 3, 00:12:46.348 "base_bdevs_list": [ 00:12:46.348 { 00:12:46.348 "name": null, 00:12:46.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.348 "is_configured": false, 00:12:46.348 "data_offset": 0, 00:12:46.348 "data_size": 65536 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev2", 00:12:46.348 "uuid": "259e6c1d-ee48-4456-b252-1fda1e20772c", 00:12:46.348 "is_configured": 
true, 00:12:46.348 "data_offset": 0, 00:12:46.348 "data_size": 65536 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev3", 00:12:46.348 "uuid": "1f65cf8e-3372-46ce-aeca-33f0c78ace1e", 00:12:46.348 "is_configured": true, 00:12:46.348 "data_offset": 0, 00:12:46.348 "data_size": 65536 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev4", 00:12:46.348 "uuid": "09b4c8e1-c62b-451a-84b3-28f3b829a741", 00:12:46.348 "is_configured": true, 00:12:46.348 "data_offset": 0, 00:12:46.348 "data_size": 65536 00:12:46.348 } 00:12:46.348 ] 00:12:46.348 }' 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.348 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:46.915 13:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 [2024-10-01 13:46:56.968140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.174 [2024-10-01 13:46:57.115711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.174 13:46:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.174 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.174 [2024-10-01 13:46:57.272788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:47.174 [2024-10-01 13:46:57.272958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:47.433 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 BaseBdev2 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
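The `raid_bdev=` assignment above relies on `select(.)` to turn "no raid bdev left" into an empty string: on an empty list, `.[0]` yields `null`, and `select(.)` drops falsy values, so the subsequent `'[' -n '' ']'` guard fails and the offline check is skipped. A hedged sketch with made-up sample data:

```shell
# With at least one raid bdev present, .[0]["name"] yields its name.
printf '%s' '[{"name": "Existed_Raid"}]' | jq -r '.[0]["name"] | select(.)'
# prints "Existed_Raid"

# On an empty list, .[0] is null; select(.) filters null out, so nothing
# is printed and raid_bdev=$(...) ends up empty rather than "null".
printf '%s' '[]' | jq -r '.[0]["name"] | select(.)'
```

Without `select(.)`, `jq -r` would print the literal string `null`, and the `-n` test would wrongly report a surviving raid bdev.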
bdev_timeout=2000 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 [ 00:12:47.434 { 00:12:47.434 "name": "BaseBdev2", 00:12:47.434 "aliases": [ 00:12:47.434 "b2af2d62-ec72-4237-b05f-fd47ebb0850a" 00:12:47.434 ], 00:12:47.434 "product_name": "Malloc disk", 00:12:47.434 "block_size": 512, 00:12:47.434 "num_blocks": 65536, 00:12:47.434 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:47.434 "assigned_rate_limits": { 00:12:47.434 "rw_ios_per_sec": 0, 00:12:47.434 "rw_mbytes_per_sec": 0, 00:12:47.434 "r_mbytes_per_sec": 0, 00:12:47.434 "w_mbytes_per_sec": 0 00:12:47.434 }, 00:12:47.434 "claimed": false, 00:12:47.434 "zoned": false, 00:12:47.434 "supported_io_types": { 00:12:47.434 "read": true, 00:12:47.434 "write": true, 00:12:47.434 "unmap": true, 00:12:47.434 "flush": true, 00:12:47.434 "reset": true, 00:12:47.434 "nvme_admin": false, 00:12:47.434 "nvme_io": false, 00:12:47.434 "nvme_io_md": false, 00:12:47.434 "write_zeroes": true, 00:12:47.434 "zcopy": true, 00:12:47.434 "get_zone_info": false, 00:12:47.434 "zone_management": false, 00:12:47.434 "zone_append": false, 00:12:47.434 "compare": false, 00:12:47.434 "compare_and_write": false, 00:12:47.434 "abort": true, 00:12:47.434 "seek_hole": false, 00:12:47.434 
"seek_data": false, 00:12:47.434 "copy": true, 00:12:47.434 "nvme_iov_md": false 00:12:47.434 }, 00:12:47.434 "memory_domains": [ 00:12:47.434 { 00:12:47.434 "dma_device_id": "system", 00:12:47.434 "dma_device_type": 1 00:12:47.434 }, 00:12:47.434 { 00:12:47.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.434 "dma_device_type": 2 00:12:47.434 } 00:12:47.434 ], 00:12:47.434 "driver_specific": {} 00:12:47.434 } 00:12:47.434 ] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 BaseBdev3 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.434 [ 00:12:47.434 { 00:12:47.434 "name": "BaseBdev3", 00:12:47.434 "aliases": [ 00:12:47.434 "616ec8c4-9ef1-461b-a66a-f1e1bfe016df" 00:12:47.434 ], 00:12:47.434 "product_name": "Malloc disk", 00:12:47.434 "block_size": 512, 00:12:47.434 "num_blocks": 65536, 00:12:47.434 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:47.434 "assigned_rate_limits": { 00:12:47.434 "rw_ios_per_sec": 0, 00:12:47.434 "rw_mbytes_per_sec": 0, 00:12:47.434 "r_mbytes_per_sec": 0, 00:12:47.434 "w_mbytes_per_sec": 0 00:12:47.434 }, 00:12:47.434 "claimed": false, 00:12:47.434 "zoned": false, 00:12:47.434 "supported_io_types": { 00:12:47.434 "read": true, 00:12:47.434 "write": true, 00:12:47.434 "unmap": true, 00:12:47.434 "flush": true, 00:12:47.434 "reset": true, 00:12:47.434 "nvme_admin": false, 00:12:47.434 "nvme_io": false, 00:12:47.434 "nvme_io_md": false, 00:12:47.434 "write_zeroes": true, 00:12:47.434 "zcopy": true, 00:12:47.434 "get_zone_info": false, 00:12:47.434 "zone_management": false, 00:12:47.434 "zone_append": false, 00:12:47.434 "compare": false, 00:12:47.434 "compare_and_write": false, 00:12:47.434 "abort": true, 00:12:47.434 "seek_hole": false, 00:12:47.434 "seek_data": false, 
00:12:47.434 "copy": true, 00:12:47.434 "nvme_iov_md": false 00:12:47.434 }, 00:12:47.434 "memory_domains": [ 00:12:47.434 { 00:12:47.434 "dma_device_id": "system", 00:12:47.434 "dma_device_type": 1 00:12:47.434 }, 00:12:47.434 { 00:12:47.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.434 "dma_device_type": 2 00:12:47.434 } 00:12:47.434 ], 00:12:47.434 "driver_specific": {} 00:12:47.434 } 00:12:47.434 ] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.434 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:47.435 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.435 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.435 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:47.435 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.435 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 BaseBdev4 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:47.694 
13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 [ 00:12:47.694 { 00:12:47.694 "name": "BaseBdev4", 00:12:47.694 "aliases": [ 00:12:47.694 "ccab6740-161f-43ef-b939-8e53b55ac130" 00:12:47.694 ], 00:12:47.694 "product_name": "Malloc disk", 00:12:47.694 "block_size": 512, 00:12:47.694 "num_blocks": 65536, 00:12:47.694 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:47.694 "assigned_rate_limits": { 00:12:47.694 "rw_ios_per_sec": 0, 00:12:47.694 "rw_mbytes_per_sec": 0, 00:12:47.694 "r_mbytes_per_sec": 0, 00:12:47.694 "w_mbytes_per_sec": 0 00:12:47.694 }, 00:12:47.694 "claimed": false, 00:12:47.694 "zoned": false, 00:12:47.694 "supported_io_types": { 00:12:47.694 "read": true, 00:12:47.694 "write": true, 00:12:47.694 "unmap": true, 00:12:47.694 "flush": true, 00:12:47.694 "reset": true, 00:12:47.694 "nvme_admin": false, 00:12:47.694 "nvme_io": false, 00:12:47.694 "nvme_io_md": false, 00:12:47.694 "write_zeroes": true, 00:12:47.694 "zcopy": true, 00:12:47.694 "get_zone_info": false, 00:12:47.694 "zone_management": false, 00:12:47.694 "zone_append": false, 00:12:47.694 "compare": false, 00:12:47.694 "compare_and_write": false, 00:12:47.694 "abort": true, 00:12:47.694 "seek_hole": false, 00:12:47.694 "seek_data": false, 00:12:47.694 
"copy": true, 00:12:47.694 "nvme_iov_md": false 00:12:47.694 }, 00:12:47.694 "memory_domains": [ 00:12:47.694 { 00:12:47.694 "dma_device_id": "system", 00:12:47.694 "dma_device_type": 1 00:12:47.694 }, 00:12:47.694 { 00:12:47.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.694 "dma_device_type": 2 00:12:47.694 } 00:12:47.694 ], 00:12:47.694 "driver_specific": {} 00:12:47.694 } 00:12:47.694 ] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 [2024-10-01 13:46:57.687653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.694 [2024-10-01 13:46:57.687829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.694 [2024-10-01 13:46:57.687938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.694 [2024-10-01 13:46:57.690043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.694 [2024-10-01 13:46:57.690205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.694 13:46:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.694 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.694 "name": "Existed_Raid", 00:12:47.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.694 "strip_size_kb": 64, 00:12:47.694 "state": "configuring", 00:12:47.694 
"raid_level": "raid0", 00:12:47.694 "superblock": false, 00:12:47.694 "num_base_bdevs": 4, 00:12:47.694 "num_base_bdevs_discovered": 3, 00:12:47.694 "num_base_bdevs_operational": 4, 00:12:47.694 "base_bdevs_list": [ 00:12:47.694 { 00:12:47.694 "name": "BaseBdev1", 00:12:47.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.694 "is_configured": false, 00:12:47.694 "data_offset": 0, 00:12:47.694 "data_size": 0 00:12:47.694 }, 00:12:47.694 { 00:12:47.695 "name": "BaseBdev2", 00:12:47.695 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:47.695 "is_configured": true, 00:12:47.695 "data_offset": 0, 00:12:47.695 "data_size": 65536 00:12:47.695 }, 00:12:47.695 { 00:12:47.695 "name": "BaseBdev3", 00:12:47.695 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:47.695 "is_configured": true, 00:12:47.695 "data_offset": 0, 00:12:47.695 "data_size": 65536 00:12:47.695 }, 00:12:47.695 { 00:12:47.695 "name": "BaseBdev4", 00:12:47.695 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:47.695 "is_configured": true, 00:12:47.695 "data_offset": 0, 00:12:47.695 "data_size": 65536 00:12:47.695 } 00:12:47.695 ] 00:12:47.695 }' 00:12:47.695 13:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.695 13:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.953 [2024-10-01 13:46:58.135581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.953 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.212 "name": "Existed_Raid", 00:12:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.212 "strip_size_kb": 64, 00:12:48.212 "state": "configuring", 00:12:48.212 "raid_level": "raid0", 00:12:48.212 "superblock": false, 00:12:48.212 
"num_base_bdevs": 4, 00:12:48.212 "num_base_bdevs_discovered": 2, 00:12:48.212 "num_base_bdevs_operational": 4, 00:12:48.212 "base_bdevs_list": [ 00:12:48.212 { 00:12:48.212 "name": "BaseBdev1", 00:12:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.212 "is_configured": false, 00:12:48.212 "data_offset": 0, 00:12:48.212 "data_size": 0 00:12:48.212 }, 00:12:48.212 { 00:12:48.212 "name": null, 00:12:48.212 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:48.212 "is_configured": false, 00:12:48.212 "data_offset": 0, 00:12:48.212 "data_size": 65536 00:12:48.212 }, 00:12:48.212 { 00:12:48.212 "name": "BaseBdev3", 00:12:48.212 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:48.212 "is_configured": true, 00:12:48.212 "data_offset": 0, 00:12:48.212 "data_size": 65536 00:12:48.212 }, 00:12:48.212 { 00:12:48.212 "name": "BaseBdev4", 00:12:48.212 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:48.212 "is_configured": true, 00:12:48.212 "data_offset": 0, 00:12:48.212 "data_size": 65536 00:12:48.212 } 00:12:48.212 ] 00:12:48.212 }' 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.212 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:48.471 13:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 [2024-10-01 13:46:58.617952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.471 BaseBdev1 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.471 13:46:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.471 [ 00:12:48.471 { 00:12:48.471 "name": "BaseBdev1", 00:12:48.471 "aliases": [ 00:12:48.471 "d5db4985-0eec-43eb-9994-d84d2fad8ca9" 00:12:48.471 ], 00:12:48.471 "product_name": "Malloc disk", 00:12:48.471 "block_size": 512, 00:12:48.471 "num_blocks": 65536, 00:12:48.471 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:48.471 "assigned_rate_limits": { 00:12:48.471 "rw_ios_per_sec": 0, 00:12:48.471 "rw_mbytes_per_sec": 0, 00:12:48.471 "r_mbytes_per_sec": 0, 00:12:48.471 "w_mbytes_per_sec": 0 00:12:48.471 }, 00:12:48.471 "claimed": true, 00:12:48.471 "claim_type": "exclusive_write", 00:12:48.471 "zoned": false, 00:12:48.471 "supported_io_types": { 00:12:48.471 "read": true, 00:12:48.471 "write": true, 00:12:48.471 "unmap": true, 00:12:48.471 "flush": true, 00:12:48.471 "reset": true, 00:12:48.471 "nvme_admin": false, 00:12:48.471 "nvme_io": false, 00:12:48.471 "nvme_io_md": false, 00:12:48.471 "write_zeroes": true, 00:12:48.471 "zcopy": true, 00:12:48.471 "get_zone_info": false, 00:12:48.471 "zone_management": false, 00:12:48.471 "zone_append": false, 00:12:48.471 "compare": false, 00:12:48.471 "compare_and_write": false, 00:12:48.471 "abort": true, 00:12:48.471 "seek_hole": false, 00:12:48.471 "seek_data": false, 00:12:48.471 "copy": true, 00:12:48.471 "nvme_iov_md": false 00:12:48.471 }, 00:12:48.471 "memory_domains": [ 00:12:48.471 { 00:12:48.471 "dma_device_id": "system", 00:12:48.471 "dma_device_type": 1 00:12:48.471 }, 00:12:48.471 { 00:12:48.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.471 "dma_device_type": 2 00:12:48.471 } 00:12:48.730 ], 00:12:48.730 "driver_specific": {} 00:12:48.730 } 00:12:48.730 ] 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.730 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.730 "name": "Existed_Raid", 00:12:48.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.730 "strip_size_kb": 64, 00:12:48.730 "state": "configuring", 00:12:48.730 "raid_level": "raid0", 00:12:48.730 "superblock": false, 
00:12:48.730 "num_base_bdevs": 4, 00:12:48.730 "num_base_bdevs_discovered": 3, 00:12:48.730 "num_base_bdevs_operational": 4, 00:12:48.730 "base_bdevs_list": [ 00:12:48.730 { 00:12:48.730 "name": "BaseBdev1", 00:12:48.730 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:48.730 "is_configured": true, 00:12:48.730 "data_offset": 0, 00:12:48.730 "data_size": 65536 00:12:48.730 }, 00:12:48.730 { 00:12:48.731 "name": null, 00:12:48.731 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:48.731 "is_configured": false, 00:12:48.731 "data_offset": 0, 00:12:48.731 "data_size": 65536 00:12:48.731 }, 00:12:48.731 { 00:12:48.731 "name": "BaseBdev3", 00:12:48.731 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:48.731 "is_configured": true, 00:12:48.731 "data_offset": 0, 00:12:48.731 "data_size": 65536 00:12:48.731 }, 00:12:48.731 { 00:12:48.731 "name": "BaseBdev4", 00:12:48.731 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:48.731 "is_configured": true, 00:12:48.731 "data_offset": 0, 00:12:48.731 "data_size": 65536 00:12:48.731 } 00:12:48.731 ] 00:12:48.731 }' 00:12:48.731 13:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.731 13:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.989 13:46:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.989 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.990 [2024-10-01 13:46:59.129432] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.990 13:46:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.990 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.260 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.260 "name": "Existed_Raid", 00:12:49.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.260 "strip_size_kb": 64, 00:12:49.260 "state": "configuring", 00:12:49.260 "raid_level": "raid0", 00:12:49.260 "superblock": false, 00:12:49.260 "num_base_bdevs": 4, 00:12:49.260 "num_base_bdevs_discovered": 2, 00:12:49.260 "num_base_bdevs_operational": 4, 00:12:49.260 "base_bdevs_list": [ 00:12:49.260 { 00:12:49.260 "name": "BaseBdev1", 00:12:49.260 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:49.260 "is_configured": true, 00:12:49.260 "data_offset": 0, 00:12:49.260 "data_size": 65536 00:12:49.260 }, 00:12:49.260 { 00:12:49.260 "name": null, 00:12:49.260 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:49.260 "is_configured": false, 00:12:49.260 "data_offset": 0, 00:12:49.260 "data_size": 65536 00:12:49.260 }, 00:12:49.260 { 00:12:49.261 "name": null, 00:12:49.261 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:49.261 "is_configured": false, 00:12:49.261 "data_offset": 0, 00:12:49.261 "data_size": 65536 00:12:49.261 }, 00:12:49.261 { 00:12:49.261 "name": "BaseBdev4", 00:12:49.261 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:49.261 "is_configured": true, 00:12:49.261 "data_offset": 0, 00:12:49.261 "data_size": 65536 00:12:49.261 } 00:12:49.261 ] 00:12:49.261 }' 00:12:49.261 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.261 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.551 [2024-10-01 13:46:59.620761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.551 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.551 "name": "Existed_Raid", 00:12:49.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.551 "strip_size_kb": 64, 00:12:49.552 "state": "configuring", 00:12:49.552 "raid_level": "raid0", 00:12:49.552 "superblock": false, 00:12:49.552 "num_base_bdevs": 4, 00:12:49.552 "num_base_bdevs_discovered": 3, 00:12:49.552 "num_base_bdevs_operational": 4, 00:12:49.552 "base_bdevs_list": [ 00:12:49.552 { 00:12:49.552 "name": "BaseBdev1", 00:12:49.552 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:49.552 "is_configured": true, 00:12:49.552 "data_offset": 0, 00:12:49.552 "data_size": 65536 00:12:49.552 }, 00:12:49.552 { 00:12:49.552 "name": null, 00:12:49.552 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:49.552 "is_configured": false, 00:12:49.552 "data_offset": 0, 00:12:49.552 "data_size": 65536 00:12:49.552 }, 00:12:49.552 { 00:12:49.552 "name": "BaseBdev3", 00:12:49.552 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 
00:12:49.552 "is_configured": true, 00:12:49.552 "data_offset": 0, 00:12:49.552 "data_size": 65536 00:12:49.552 }, 00:12:49.552 { 00:12:49.552 "name": "BaseBdev4", 00:12:49.552 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:49.552 "is_configured": true, 00:12:49.552 "data_offset": 0, 00:12:49.552 "data_size": 65536 00:12:49.552 } 00:12:49.552 ] 00:12:49.552 }' 00:12:49.552 13:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.552 13:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.118 [2024-10-01 13:47:00.096115] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.118 13:47:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.118 "name": "Existed_Raid", 00:12:50.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.118 "strip_size_kb": 64, 00:12:50.118 "state": "configuring", 00:12:50.118 "raid_level": "raid0", 00:12:50.118 "superblock": false, 00:12:50.118 "num_base_bdevs": 4, 00:12:50.118 "num_base_bdevs_discovered": 2, 00:12:50.118 
"num_base_bdevs_operational": 4, 00:12:50.118 "base_bdevs_list": [ 00:12:50.118 { 00:12:50.118 "name": null, 00:12:50.118 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:50.118 "is_configured": false, 00:12:50.118 "data_offset": 0, 00:12:50.118 "data_size": 65536 00:12:50.118 }, 00:12:50.118 { 00:12:50.118 "name": null, 00:12:50.118 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:50.118 "is_configured": false, 00:12:50.118 "data_offset": 0, 00:12:50.118 "data_size": 65536 00:12:50.118 }, 00:12:50.118 { 00:12:50.118 "name": "BaseBdev3", 00:12:50.118 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:50.118 "is_configured": true, 00:12:50.118 "data_offset": 0, 00:12:50.118 "data_size": 65536 00:12:50.118 }, 00:12:50.118 { 00:12:50.118 "name": "BaseBdev4", 00:12:50.118 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:50.118 "is_configured": true, 00:12:50.118 "data_offset": 0, 00:12:50.118 "data_size": 65536 00:12:50.118 } 00:12:50.118 ] 00:12:50.118 }' 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.118 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 [2024-10-01 13:47:00.676141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.684 13:47:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.684 "name": "Existed_Raid", 00:12:50.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.684 "strip_size_kb": 64, 00:12:50.684 "state": "configuring", 00:12:50.684 "raid_level": "raid0", 00:12:50.684 "superblock": false, 00:12:50.684 "num_base_bdevs": 4, 00:12:50.684 "num_base_bdevs_discovered": 3, 00:12:50.684 "num_base_bdevs_operational": 4, 00:12:50.684 "base_bdevs_list": [ 00:12:50.684 { 00:12:50.684 "name": null, 00:12:50.684 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:50.684 "is_configured": false, 00:12:50.684 "data_offset": 0, 00:12:50.684 "data_size": 65536 00:12:50.684 }, 00:12:50.684 { 00:12:50.684 "name": "BaseBdev2", 00:12:50.684 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:50.684 "is_configured": true, 00:12:50.684 "data_offset": 0, 00:12:50.684 "data_size": 65536 00:12:50.684 }, 00:12:50.684 { 00:12:50.684 "name": "BaseBdev3", 00:12:50.684 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:50.684 "is_configured": true, 00:12:50.684 "data_offset": 0, 00:12:50.684 "data_size": 65536 00:12:50.684 }, 00:12:50.684 { 00:12:50.684 "name": "BaseBdev4", 00:12:50.684 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:50.684 "is_configured": true, 00:12:50.684 "data_offset": 0, 00:12:50.684 "data_size": 65536 00:12:50.684 } 00:12:50.684 ] 00:12:50.684 }' 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.684 13:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.942 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.942 
13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.942 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.942 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.942 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.942 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5db4985-0eec-43eb-9994-d84d2fad8ca9 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.200 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.200 [2024-10-01 13:47:01.213245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.200 [2024-10-01 13:47:01.213304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.200 [2024-10-01 13:47:01.213314] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:51.200 [2024-10-01 13:47:01.213614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:12:51.200 NewBaseBdev 00:12:51.200 [2024-10-01 13:47:01.213757] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.201 [2024-10-01 13:47:01.213771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.201 [2024-10-01 13:47:01.214027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.201 [ 00:12:51.201 { 00:12:51.201 "name": "NewBaseBdev", 00:12:51.201 "aliases": [ 00:12:51.201 "d5db4985-0eec-43eb-9994-d84d2fad8ca9" 00:12:51.201 ], 00:12:51.201 "product_name": "Malloc disk", 00:12:51.201 "block_size": 512, 00:12:51.201 "num_blocks": 65536, 00:12:51.201 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:51.201 "assigned_rate_limits": { 00:12:51.201 "rw_ios_per_sec": 0, 00:12:51.201 "rw_mbytes_per_sec": 0, 00:12:51.201 "r_mbytes_per_sec": 0, 00:12:51.201 "w_mbytes_per_sec": 0 00:12:51.201 }, 00:12:51.201 "claimed": true, 00:12:51.201 "claim_type": "exclusive_write", 00:12:51.201 "zoned": false, 00:12:51.201 "supported_io_types": { 00:12:51.201 "read": true, 00:12:51.201 "write": true, 00:12:51.201 "unmap": true, 00:12:51.201 "flush": true, 00:12:51.201 "reset": true, 00:12:51.201 "nvme_admin": false, 00:12:51.201 "nvme_io": false, 00:12:51.201 "nvme_io_md": false, 00:12:51.201 "write_zeroes": true, 00:12:51.201 "zcopy": true, 00:12:51.201 "get_zone_info": false, 00:12:51.201 "zone_management": false, 00:12:51.201 "zone_append": false, 00:12:51.201 "compare": false, 00:12:51.201 "compare_and_write": false, 00:12:51.201 "abort": true, 00:12:51.201 "seek_hole": false, 00:12:51.201 "seek_data": false, 00:12:51.201 "copy": true, 00:12:51.201 "nvme_iov_md": false 00:12:51.201 }, 00:12:51.201 "memory_domains": [ 00:12:51.201 { 00:12:51.201 "dma_device_id": "system", 00:12:51.201 "dma_device_type": 1 00:12:51.201 }, 00:12:51.201 { 00:12:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.201 "dma_device_type": 2 00:12:51.201 } 00:12:51.201 ], 00:12:51.201 "driver_specific": {} 00:12:51.201 } 00:12:51.201 ] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.201 "name": "Existed_Raid", 00:12:51.201 "uuid": "f18eb630-9165-4a09-9b37-59b91a91876c", 00:12:51.201 "strip_size_kb": 64, 00:12:51.201 "state": "online", 00:12:51.201 "raid_level": "raid0", 00:12:51.201 "superblock": false, 00:12:51.201 "num_base_bdevs": 4, 00:12:51.201 
"num_base_bdevs_discovered": 4, 00:12:51.201 "num_base_bdevs_operational": 4, 00:12:51.201 "base_bdevs_list": [ 00:12:51.201 { 00:12:51.201 "name": "NewBaseBdev", 00:12:51.201 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:51.201 "is_configured": true, 00:12:51.201 "data_offset": 0, 00:12:51.201 "data_size": 65536 00:12:51.201 }, 00:12:51.201 { 00:12:51.201 "name": "BaseBdev2", 00:12:51.201 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:51.201 "is_configured": true, 00:12:51.201 "data_offset": 0, 00:12:51.201 "data_size": 65536 00:12:51.201 }, 00:12:51.201 { 00:12:51.201 "name": "BaseBdev3", 00:12:51.201 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:51.201 "is_configured": true, 00:12:51.201 "data_offset": 0, 00:12:51.201 "data_size": 65536 00:12:51.201 }, 00:12:51.201 { 00:12:51.201 "name": "BaseBdev4", 00:12:51.201 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:51.201 "is_configured": true, 00:12:51.201 "data_offset": 0, 00:12:51.201 "data_size": 65536 00:12:51.201 } 00:12:51.201 ] 00:12:51.201 }' 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.201 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.768 [2024-10-01 13:47:01.728939] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.768 "name": "Existed_Raid", 00:12:51.768 "aliases": [ 00:12:51.768 "f18eb630-9165-4a09-9b37-59b91a91876c" 00:12:51.768 ], 00:12:51.768 "product_name": "Raid Volume", 00:12:51.768 "block_size": 512, 00:12:51.768 "num_blocks": 262144, 00:12:51.768 "uuid": "f18eb630-9165-4a09-9b37-59b91a91876c", 00:12:51.768 "assigned_rate_limits": { 00:12:51.768 "rw_ios_per_sec": 0, 00:12:51.768 "rw_mbytes_per_sec": 0, 00:12:51.768 "r_mbytes_per_sec": 0, 00:12:51.768 "w_mbytes_per_sec": 0 00:12:51.768 }, 00:12:51.768 "claimed": false, 00:12:51.768 "zoned": false, 00:12:51.768 "supported_io_types": { 00:12:51.768 "read": true, 00:12:51.768 "write": true, 00:12:51.768 "unmap": true, 00:12:51.768 "flush": true, 00:12:51.768 "reset": true, 00:12:51.768 "nvme_admin": false, 00:12:51.768 "nvme_io": false, 00:12:51.768 "nvme_io_md": false, 00:12:51.768 "write_zeroes": true, 00:12:51.768 "zcopy": false, 00:12:51.768 "get_zone_info": false, 00:12:51.768 "zone_management": false, 00:12:51.768 "zone_append": false, 00:12:51.768 "compare": false, 00:12:51.768 "compare_and_write": false, 00:12:51.768 "abort": false, 00:12:51.768 "seek_hole": false, 00:12:51.768 "seek_data": false, 00:12:51.768 "copy": false, 00:12:51.768 "nvme_iov_md": false 00:12:51.768 }, 00:12:51.768 "memory_domains": [ 
00:12:51.768 { 00:12:51.768 "dma_device_id": "system", 00:12:51.768 "dma_device_type": 1 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.768 "dma_device_type": 2 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "system", 00:12:51.768 "dma_device_type": 1 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.768 "dma_device_type": 2 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "system", 00:12:51.768 "dma_device_type": 1 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.768 "dma_device_type": 2 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "system", 00:12:51.768 "dma_device_type": 1 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.768 "dma_device_type": 2 00:12:51.768 } 00:12:51.768 ], 00:12:51.768 "driver_specific": { 00:12:51.768 "raid": { 00:12:51.768 "uuid": "f18eb630-9165-4a09-9b37-59b91a91876c", 00:12:51.768 "strip_size_kb": 64, 00:12:51.768 "state": "online", 00:12:51.768 "raid_level": "raid0", 00:12:51.768 "superblock": false, 00:12:51.768 "num_base_bdevs": 4, 00:12:51.768 "num_base_bdevs_discovered": 4, 00:12:51.768 "num_base_bdevs_operational": 4, 00:12:51.768 "base_bdevs_list": [ 00:12:51.768 { 00:12:51.768 "name": "NewBaseBdev", 00:12:51.768 "uuid": "d5db4985-0eec-43eb-9994-d84d2fad8ca9", 00:12:51.768 "is_configured": true, 00:12:51.768 "data_offset": 0, 00:12:51.768 "data_size": 65536 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "name": "BaseBdev2", 00:12:51.768 "uuid": "b2af2d62-ec72-4237-b05f-fd47ebb0850a", 00:12:51.768 "is_configured": true, 00:12:51.768 "data_offset": 0, 00:12:51.768 "data_size": 65536 00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "name": "BaseBdev3", 00:12:51.768 "uuid": "616ec8c4-9ef1-461b-a66a-f1e1bfe016df", 00:12:51.768 "is_configured": true, 00:12:51.768 "data_offset": 0, 00:12:51.768 "data_size": 65536 
00:12:51.768 }, 00:12:51.768 { 00:12:51.768 "name": "BaseBdev4", 00:12:51.768 "uuid": "ccab6740-161f-43ef-b939-8e53b55ac130", 00:12:51.768 "is_configured": true, 00:12:51.768 "data_offset": 0, 00:12:51.768 "data_size": 65536 00:12:51.768 } 00:12:51.768 ] 00:12:51.768 } 00:12:51.768 } 00:12:51.768 }' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.768 BaseBdev2 00:12:51.768 BaseBdev3 00:12:51.768 BaseBdev4' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.768 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.769 
13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.769 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.027 13:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.027 [2024-10-01 13:47:02.044522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.027 [2024-10-01 13:47:02.044554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.027 [2024-10-01 13:47:02.044634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.027 [2024-10-01 13:47:02.044703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.027 [2024-10-01 13:47:02.044715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69269 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69269 ']' 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69269 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69269 00:12:52.027 killing process with pid 69269 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69269' 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69269 00:12:52.027 [2024-10-01 13:47:02.095875] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.027 13:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69269 00:12:52.592 [2024-10-01 13:47:02.497378] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.639 13:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:53.639 00:12:53.639 real 0m11.593s 00:12:53.639 user 0m18.294s 00:12:53.639 sys 0m2.329s 00:12:53.639 13:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.639 ************************************ 00:12:53.639 END TEST raid_state_function_test 00:12:53.639 ************************************ 00:12:53.639 13:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.898 13:47:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:53.898 13:47:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:53.898 13:47:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.898 13:47:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.898 ************************************ 00:12:53.898 START TEST raid_state_function_test_sb 00:12:53.898 ************************************ 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:53.898 
13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:53.898 Process raid pid: 69939 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69939 00:12:53.898 13:47:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69939' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69939 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 69939 ']' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.898 13:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.898 [2024-10-01 13:47:03.975497] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:12:53.898 [2024-10-01 13:47:03.976272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.157 [2024-10-01 13:47:04.148055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.415 [2024-10-01 13:47:04.364080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.415 [2024-10-01 13:47:04.580072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.415 [2024-10-01 13:47:04.580321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.674 [2024-10-01 13:47:04.816762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.674 [2024-10-01 13:47:04.816946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.674 [2024-10-01 13:47:04.816972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.674 [2024-10-01 13:47:04.816987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.674 [2024-10-01 13:47:04.816994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:54.674 [2024-10-01 13:47:04.817009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.674 [2024-10-01 13:47:04.817016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:54.674 [2024-10-01 13:47:04.817028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.674 13:47:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.674 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.933 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.933 "name": "Existed_Raid", 00:12:54.933 "uuid": "0f161686-182d-4001-ae1e-2577d72321fc", 00:12:54.933 "strip_size_kb": 64, 00:12:54.933 "state": "configuring", 00:12:54.933 "raid_level": "raid0", 00:12:54.933 "superblock": true, 00:12:54.933 "num_base_bdevs": 4, 00:12:54.933 "num_base_bdevs_discovered": 0, 00:12:54.933 "num_base_bdevs_operational": 4, 00:12:54.933 "base_bdevs_list": [ 00:12:54.933 { 00:12:54.933 "name": "BaseBdev1", 00:12:54.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.933 "is_configured": false, 00:12:54.933 "data_offset": 0, 00:12:54.933 "data_size": 0 00:12:54.933 }, 00:12:54.933 { 00:12:54.933 "name": "BaseBdev2", 00:12:54.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.933 "is_configured": false, 00:12:54.933 "data_offset": 0, 00:12:54.933 "data_size": 0 00:12:54.933 }, 00:12:54.933 { 00:12:54.933 "name": "BaseBdev3", 00:12:54.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.933 "is_configured": false, 00:12:54.933 "data_offset": 0, 00:12:54.933 "data_size": 0 00:12:54.933 }, 00:12:54.933 { 00:12:54.933 "name": "BaseBdev4", 00:12:54.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.933 "is_configured": false, 00:12:54.933 "data_offset": 0, 00:12:54.933 "data_size": 0 00:12:54.933 } 00:12:54.933 ] 00:12:54.933 }' 00:12:54.933 13:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.933 13:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.191 13:47:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.191 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.191 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.191 [2024-10-01 13:47:05.236122] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.191 [2024-10-01 13:47:05.236183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:55.191 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.192 [2024-10-01 13:47:05.248117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.192 [2024-10-01 13:47:05.248284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.192 [2024-10-01 13:47:05.248305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.192 [2024-10-01 13:47:05.248319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.192 [2024-10-01 13:47:05.248327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.192 [2024-10-01 13:47:05.248339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.192 [2024-10-01 13:47:05.248348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:55.192 [2024-10-01 13:47:05.248360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.192 [2024-10-01 13:47:05.306713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.192 BaseBdev1 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.192 [ 00:12:55.192 { 00:12:55.192 "name": "BaseBdev1", 00:12:55.192 "aliases": [ 00:12:55.192 "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb" 00:12:55.192 ], 00:12:55.192 "product_name": "Malloc disk", 00:12:55.192 "block_size": 512, 00:12:55.192 "num_blocks": 65536, 00:12:55.192 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:55.192 "assigned_rate_limits": { 00:12:55.192 "rw_ios_per_sec": 0, 00:12:55.192 "rw_mbytes_per_sec": 0, 00:12:55.192 "r_mbytes_per_sec": 0, 00:12:55.192 "w_mbytes_per_sec": 0 00:12:55.192 }, 00:12:55.192 "claimed": true, 00:12:55.192 "claim_type": "exclusive_write", 00:12:55.192 "zoned": false, 00:12:55.192 "supported_io_types": { 00:12:55.192 "read": true, 00:12:55.192 "write": true, 00:12:55.192 "unmap": true, 00:12:55.192 "flush": true, 00:12:55.192 "reset": true, 00:12:55.192 "nvme_admin": false, 00:12:55.192 "nvme_io": false, 00:12:55.192 "nvme_io_md": false, 00:12:55.192 "write_zeroes": true, 00:12:55.192 "zcopy": true, 00:12:55.192 "get_zone_info": false, 00:12:55.192 "zone_management": false, 00:12:55.192 "zone_append": false, 00:12:55.192 "compare": false, 00:12:55.192 "compare_and_write": false, 00:12:55.192 "abort": true, 00:12:55.192 "seek_hole": false, 00:12:55.192 "seek_data": false, 00:12:55.192 "copy": true, 00:12:55.192 "nvme_iov_md": false 00:12:55.192 }, 00:12:55.192 "memory_domains": [ 00:12:55.192 { 00:12:55.192 "dma_device_id": "system", 00:12:55.192 "dma_device_type": 1 00:12:55.192 }, 00:12:55.192 { 00:12:55.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.192 "dma_device_type": 2 00:12:55.192 } 00:12:55.192 ], 00:12:55.192 "driver_specific": {} 
00:12:55.192 } 00:12:55.192 ] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.192 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.451 13:47:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.451 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.451 "name": "Existed_Raid", 00:12:55.451 "uuid": "9716ff0e-1f13-4b26-aac7-d24af2294500", 00:12:55.451 "strip_size_kb": 64, 00:12:55.451 "state": "configuring", 00:12:55.451 "raid_level": "raid0", 00:12:55.451 "superblock": true, 00:12:55.451 "num_base_bdevs": 4, 00:12:55.451 "num_base_bdevs_discovered": 1, 00:12:55.451 "num_base_bdevs_operational": 4, 00:12:55.451 "base_bdevs_list": [ 00:12:55.451 { 00:12:55.451 "name": "BaseBdev1", 00:12:55.451 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:55.451 "is_configured": true, 00:12:55.451 "data_offset": 2048, 00:12:55.451 "data_size": 63488 00:12:55.451 }, 00:12:55.451 { 00:12:55.451 "name": "BaseBdev2", 00:12:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.451 "is_configured": false, 00:12:55.451 "data_offset": 0, 00:12:55.451 "data_size": 0 00:12:55.451 }, 00:12:55.451 { 00:12:55.451 "name": "BaseBdev3", 00:12:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.451 "is_configured": false, 00:12:55.451 "data_offset": 0, 00:12:55.451 "data_size": 0 00:12:55.451 }, 00:12:55.451 { 00:12:55.451 "name": "BaseBdev4", 00:12:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.451 "is_configured": false, 00:12:55.451 "data_offset": 0, 00:12:55.451 "data_size": 0 00:12:55.451 } 00:12:55.451 ] 00:12:55.451 }' 00:12:55.451 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.451 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.709 [2024-10-01 13:47:05.786377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.709 [2024-10-01 13:47:05.786571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.709 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.709 [2024-10-01 13:47:05.798426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.709 [2024-10-01 13:47:05.800645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.709 [2024-10-01 13:47:05.800791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.709 [2024-10-01 13:47:05.800906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.710 [2024-10-01 13:47:05.800956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.710 [2024-10-01 13:47:05.800985] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:55.710 [2024-10-01 13:47:05.801017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:55.710 13:47:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.710 "name": 
"Existed_Raid", 00:12:55.710 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:55.710 "strip_size_kb": 64, 00:12:55.710 "state": "configuring", 00:12:55.710 "raid_level": "raid0", 00:12:55.710 "superblock": true, 00:12:55.710 "num_base_bdevs": 4, 00:12:55.710 "num_base_bdevs_discovered": 1, 00:12:55.710 "num_base_bdevs_operational": 4, 00:12:55.710 "base_bdevs_list": [ 00:12:55.710 { 00:12:55.710 "name": "BaseBdev1", 00:12:55.710 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:55.710 "is_configured": true, 00:12:55.710 "data_offset": 2048, 00:12:55.710 "data_size": 63488 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "BaseBdev2", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.710 "is_configured": false, 00:12:55.710 "data_offset": 0, 00:12:55.710 "data_size": 0 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "BaseBdev3", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.710 "is_configured": false, 00:12:55.710 "data_offset": 0, 00:12:55.710 "data_size": 0 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "BaseBdev4", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.710 "is_configured": false, 00:12:55.710 "data_offset": 0, 00:12:55.710 "data_size": 0 00:12:55.710 } 00:12:55.710 ] 00:12:55.710 }' 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.710 13:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 [2024-10-01 13:47:06.264331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
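For context on what the test is asserting here: the `verify_raid_bdev_state` bash helper invoked throughout this log reads the JSON emitted by `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state`, `raid_level`, and `num_base_bdevs_discovered` against the expected values. The helper itself is bash; the following is only an illustrative Python sketch of the same check, using the field names and the one-of-four-configured snapshot taken directly from the JSON dumped just above:

```python
import json

# Snapshot of the jq-filtered bdev_raid_get_bdevs output from the log above:
# BaseBdev1 has been created and claimed, the other three do not exist yet.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0,    "data_size": 0},
    {"name": "BaseBdev3", "is_configured": false, "data_offset": 0,    "data_size": 0},
    {"name": "BaseBdev4", "is_configured": false, "data_offset": 0,    "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the intent of the bash helper: the raid bdev stays in
    # "configuring" until every base bdev is discovered and claimed,
    # and the discovered count must match the configured entries.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

# Same invocation shape as "verify_raid_bdev_state Existed_Raid configuring raid0 64 4"
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 4)
```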
00:12:56.278 BaseBdev2 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 [ 00:12:56.278 { 00:12:56.278 "name": "BaseBdev2", 00:12:56.278 "aliases": [ 00:12:56.278 "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4" 00:12:56.278 ], 00:12:56.278 "product_name": "Malloc disk", 00:12:56.278 "block_size": 512, 00:12:56.278 "num_blocks": 65536, 00:12:56.278 "uuid": "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:56.278 
"assigned_rate_limits": { 00:12:56.278 "rw_ios_per_sec": 0, 00:12:56.278 "rw_mbytes_per_sec": 0, 00:12:56.278 "r_mbytes_per_sec": 0, 00:12:56.278 "w_mbytes_per_sec": 0 00:12:56.278 }, 00:12:56.278 "claimed": true, 00:12:56.278 "claim_type": "exclusive_write", 00:12:56.278 "zoned": false, 00:12:56.278 "supported_io_types": { 00:12:56.278 "read": true, 00:12:56.278 "write": true, 00:12:56.278 "unmap": true, 00:12:56.278 "flush": true, 00:12:56.278 "reset": true, 00:12:56.278 "nvme_admin": false, 00:12:56.278 "nvme_io": false, 00:12:56.278 "nvme_io_md": false, 00:12:56.278 "write_zeroes": true, 00:12:56.278 "zcopy": true, 00:12:56.278 "get_zone_info": false, 00:12:56.278 "zone_management": false, 00:12:56.278 "zone_append": false, 00:12:56.278 "compare": false, 00:12:56.278 "compare_and_write": false, 00:12:56.278 "abort": true, 00:12:56.278 "seek_hole": false, 00:12:56.278 "seek_data": false, 00:12:56.278 "copy": true, 00:12:56.278 "nvme_iov_md": false 00:12:56.278 }, 00:12:56.278 "memory_domains": [ 00:12:56.278 { 00:12:56.278 "dma_device_id": "system", 00:12:56.278 "dma_device_type": 1 00:12:56.278 }, 00:12:56.278 { 00:12:56.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.278 "dma_device_type": 2 00:12:56.278 } 00:12:56.278 ], 00:12:56.278 "driver_specific": {} 00:12:56.278 } 00:12:56.278 ] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.278 "name": "Existed_Raid", 00:12:56.278 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:56.278 "strip_size_kb": 64, 00:12:56.278 "state": "configuring", 00:12:56.278 "raid_level": "raid0", 00:12:56.278 "superblock": true, 00:12:56.278 "num_base_bdevs": 4, 00:12:56.278 "num_base_bdevs_discovered": 2, 00:12:56.278 "num_base_bdevs_operational": 4, 
00:12:56.278 "base_bdevs_list": [ 00:12:56.278 { 00:12:56.278 "name": "BaseBdev1", 00:12:56.278 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:56.278 "is_configured": true, 00:12:56.278 "data_offset": 2048, 00:12:56.278 "data_size": 63488 00:12:56.278 }, 00:12:56.278 { 00:12:56.278 "name": "BaseBdev2", 00:12:56.278 "uuid": "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:56.278 "is_configured": true, 00:12:56.278 "data_offset": 2048, 00:12:56.278 "data_size": 63488 00:12:56.278 }, 00:12:56.278 { 00:12:56.278 "name": "BaseBdev3", 00:12:56.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.278 "is_configured": false, 00:12:56.278 "data_offset": 0, 00:12:56.278 "data_size": 0 00:12:56.278 }, 00:12:56.278 { 00:12:56.278 "name": "BaseBdev4", 00:12:56.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.278 "is_configured": false, 00:12:56.278 "data_offset": 0, 00:12:56.278 "data_size": 0 00:12:56.278 } 00:12:56.278 ] 00:12:56.278 }' 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.278 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.894 [2024-10-01 13:47:06.780901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.894 BaseBdev3 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.894 [ 00:12:56.894 { 00:12:56.894 "name": "BaseBdev3", 00:12:56.894 "aliases": [ 00:12:56.894 "f0dee77b-aff8-4827-95d4-c3dba0938425" 00:12:56.894 ], 00:12:56.894 "product_name": "Malloc disk", 00:12:56.894 "block_size": 512, 00:12:56.894 "num_blocks": 65536, 00:12:56.894 "uuid": "f0dee77b-aff8-4827-95d4-c3dba0938425", 00:12:56.894 "assigned_rate_limits": { 00:12:56.894 "rw_ios_per_sec": 0, 00:12:56.894 "rw_mbytes_per_sec": 0, 00:12:56.894 "r_mbytes_per_sec": 0, 00:12:56.894 "w_mbytes_per_sec": 0 00:12:56.894 }, 00:12:56.894 "claimed": true, 00:12:56.894 "claim_type": "exclusive_write", 00:12:56.894 "zoned": false, 00:12:56.894 "supported_io_types": { 00:12:56.894 "read": true, 00:12:56.894 
"write": true, 00:12:56.894 "unmap": true, 00:12:56.894 "flush": true, 00:12:56.894 "reset": true, 00:12:56.894 "nvme_admin": false, 00:12:56.894 "nvme_io": false, 00:12:56.894 "nvme_io_md": false, 00:12:56.894 "write_zeroes": true, 00:12:56.894 "zcopy": true, 00:12:56.894 "get_zone_info": false, 00:12:56.894 "zone_management": false, 00:12:56.894 "zone_append": false, 00:12:56.894 "compare": false, 00:12:56.894 "compare_and_write": false, 00:12:56.894 "abort": true, 00:12:56.894 "seek_hole": false, 00:12:56.894 "seek_data": false, 00:12:56.894 "copy": true, 00:12:56.894 "nvme_iov_md": false 00:12:56.894 }, 00:12:56.894 "memory_domains": [ 00:12:56.894 { 00:12:56.894 "dma_device_id": "system", 00:12:56.894 "dma_device_type": 1 00:12:56.894 }, 00:12:56.894 { 00:12:56.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.894 "dma_device_type": 2 00:12:56.894 } 00:12:56.894 ], 00:12:56.894 "driver_specific": {} 00:12:56.894 } 00:12:56.894 ] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.894 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.894 "name": "Existed_Raid", 00:12:56.894 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:56.894 "strip_size_kb": 64, 00:12:56.894 "state": "configuring", 00:12:56.894 "raid_level": "raid0", 00:12:56.894 "superblock": true, 00:12:56.894 "num_base_bdevs": 4, 00:12:56.894 "num_base_bdevs_discovered": 3, 00:12:56.894 "num_base_bdevs_operational": 4, 00:12:56.894 "base_bdevs_list": [ 00:12:56.894 { 00:12:56.894 "name": "BaseBdev1", 00:12:56.894 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:56.894 "is_configured": true, 00:12:56.894 "data_offset": 2048, 00:12:56.894 "data_size": 63488 00:12:56.894 }, 00:12:56.894 { 00:12:56.894 "name": "BaseBdev2", 00:12:56.894 "uuid": 
"69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:56.894 "is_configured": true, 00:12:56.894 "data_offset": 2048, 00:12:56.894 "data_size": 63488 00:12:56.894 }, 00:12:56.894 { 00:12:56.895 "name": "BaseBdev3", 00:12:56.895 "uuid": "f0dee77b-aff8-4827-95d4-c3dba0938425", 00:12:56.895 "is_configured": true, 00:12:56.895 "data_offset": 2048, 00:12:56.895 "data_size": 63488 00:12:56.895 }, 00:12:56.895 { 00:12:56.895 "name": "BaseBdev4", 00:12:56.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.895 "is_configured": false, 00:12:56.895 "data_offset": 0, 00:12:56.895 "data_size": 0 00:12:56.895 } 00:12:56.895 ] 00:12:56.895 }' 00:12:56.895 13:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.895 13:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 [2024-10-01 13:47:07.319232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:57.153 [2024-10-01 13:47:07.319562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.153 [2024-10-01 13:47:07.319580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:57.153 [2024-10-01 13:47:07.319863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:57.153 [2024-10-01 13:47:07.319996] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.153 [2024-10-01 13:47:07.320014] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
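The geometry logged at configure time is internally consistent: each base bdev is created with `bdev_malloc_create 32 512` (32 MiB at 512-byte blocks, i.e. 65536 blocks), the `-s` superblock flag reserves a `data_offset` of 2048 blocks leaving a `data_size` of 63488, and since raid0 stripes across all four members the resulting capacity is the sum, matching the `blockcnt 253952, blocklen 512` line above. A quick check of that arithmetic, with every value taken directly from the log:

```python
# Values from the log: "bdev_malloc_create 32 512" and the raid_bdev_info dumps.
num_base_bdevs = 4
base_num_blocks = (32 * 1024 * 1024) // 512  # 32 MiB of 512 B blocks = 65536
superblock_offset = 2048                     # data_offset once a base bdev is claimed

data_size = base_num_blocks - superblock_offset
assert data_size == 63488                    # matches data_size in each configured entry

# raid0 capacity is the sum of the per-member data regions.
raid_blockcnt = num_base_bdevs * data_size
assert raid_blockcnt == 253952               # matches "blockcnt 253952, blocklen 512"
```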
00:12:57.153 BaseBdev4 00:12:57.153 [2024-10-01 13:47:07.320153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 [ 00:12:57.153 { 00:12:57.412 "name": "BaseBdev4", 00:12:57.412 "aliases": [ 00:12:57.412 "0379fda6-abe3-45e5-9a90-11104bcfbb12" 00:12:57.412 ], 00:12:57.412 "product_name": "Malloc disk", 00:12:57.412 "block_size": 512, 00:12:57.412 
"num_blocks": 65536, 00:12:57.412 "uuid": "0379fda6-abe3-45e5-9a90-11104bcfbb12", 00:12:57.412 "assigned_rate_limits": { 00:12:57.412 "rw_ios_per_sec": 0, 00:12:57.412 "rw_mbytes_per_sec": 0, 00:12:57.412 "r_mbytes_per_sec": 0, 00:12:57.412 "w_mbytes_per_sec": 0 00:12:57.412 }, 00:12:57.412 "claimed": true, 00:12:57.412 "claim_type": "exclusive_write", 00:12:57.412 "zoned": false, 00:12:57.412 "supported_io_types": { 00:12:57.412 "read": true, 00:12:57.412 "write": true, 00:12:57.412 "unmap": true, 00:12:57.412 "flush": true, 00:12:57.412 "reset": true, 00:12:57.412 "nvme_admin": false, 00:12:57.412 "nvme_io": false, 00:12:57.412 "nvme_io_md": false, 00:12:57.412 "write_zeroes": true, 00:12:57.412 "zcopy": true, 00:12:57.412 "get_zone_info": false, 00:12:57.412 "zone_management": false, 00:12:57.412 "zone_append": false, 00:12:57.412 "compare": false, 00:12:57.412 "compare_and_write": false, 00:12:57.412 "abort": true, 00:12:57.412 "seek_hole": false, 00:12:57.412 "seek_data": false, 00:12:57.412 "copy": true, 00:12:57.412 "nvme_iov_md": false 00:12:57.412 }, 00:12:57.412 "memory_domains": [ 00:12:57.412 { 00:12:57.412 "dma_device_id": "system", 00:12:57.412 "dma_device_type": 1 00:12:57.412 }, 00:12:57.412 { 00:12:57.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.412 "dma_device_type": 2 00:12:57.412 } 00:12:57.412 ], 00:12:57.412 "driver_specific": {} 00:12:57.412 } 00:12:57.412 ] 00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:57.412 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.413 "name": "Existed_Raid", 00:12:57.413 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:57.413 "strip_size_kb": 64, 00:12:57.413 "state": "online", 00:12:57.413 "raid_level": "raid0", 00:12:57.413 "superblock": true, 00:12:57.413 "num_base_bdevs": 4, 
00:12:57.413 "num_base_bdevs_discovered": 4, 00:12:57.413 "num_base_bdevs_operational": 4, 00:12:57.413 "base_bdevs_list": [ 00:12:57.413 { 00:12:57.413 "name": "BaseBdev1", 00:12:57.413 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:57.413 "is_configured": true, 00:12:57.413 "data_offset": 2048, 00:12:57.413 "data_size": 63488 00:12:57.413 }, 00:12:57.413 { 00:12:57.413 "name": "BaseBdev2", 00:12:57.413 "uuid": "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:57.413 "is_configured": true, 00:12:57.413 "data_offset": 2048, 00:12:57.413 "data_size": 63488 00:12:57.413 }, 00:12:57.413 { 00:12:57.413 "name": "BaseBdev3", 00:12:57.413 "uuid": "f0dee77b-aff8-4827-95d4-c3dba0938425", 00:12:57.413 "is_configured": true, 00:12:57.413 "data_offset": 2048, 00:12:57.413 "data_size": 63488 00:12:57.413 }, 00:12:57.413 { 00:12:57.413 "name": "BaseBdev4", 00:12:57.413 "uuid": "0379fda6-abe3-45e5-9a90-11104bcfbb12", 00:12:57.413 "is_configured": true, 00:12:57.413 "data_offset": 2048, 00:12:57.413 "data_size": 63488 00:12:57.413 } 00:12:57.413 ] 00:12:57.413 }' 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.413 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.672 
13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.672 [2024-10-01 13:47:07.723040] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.672 "name": "Existed_Raid", 00:12:57.672 "aliases": [ 00:12:57.672 "08d96063-62e3-4b2e-a294-5a7baea534ea" 00:12:57.672 ], 00:12:57.672 "product_name": "Raid Volume", 00:12:57.672 "block_size": 512, 00:12:57.672 "num_blocks": 253952, 00:12:57.672 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:57.672 "assigned_rate_limits": { 00:12:57.672 "rw_ios_per_sec": 0, 00:12:57.672 "rw_mbytes_per_sec": 0, 00:12:57.672 "r_mbytes_per_sec": 0, 00:12:57.672 "w_mbytes_per_sec": 0 00:12:57.672 }, 00:12:57.672 "claimed": false, 00:12:57.672 "zoned": false, 00:12:57.672 "supported_io_types": { 00:12:57.672 "read": true, 00:12:57.672 "write": true, 00:12:57.672 "unmap": true, 00:12:57.672 "flush": true, 00:12:57.672 "reset": true, 00:12:57.672 "nvme_admin": false, 00:12:57.672 "nvme_io": false, 00:12:57.672 "nvme_io_md": false, 00:12:57.672 "write_zeroes": true, 00:12:57.672 "zcopy": false, 00:12:57.672 "get_zone_info": false, 00:12:57.672 "zone_management": false, 00:12:57.672 "zone_append": false, 00:12:57.672 "compare": false, 00:12:57.672 "compare_and_write": false, 00:12:57.672 "abort": false, 00:12:57.672 "seek_hole": false, 00:12:57.672 "seek_data": false, 00:12:57.672 "copy": false, 00:12:57.672 
"nvme_iov_md": false 00:12:57.672 }, 00:12:57.672 "memory_domains": [ 00:12:57.672 { 00:12:57.672 "dma_device_id": "system", 00:12:57.672 "dma_device_type": 1 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.672 "dma_device_type": 2 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "system", 00:12:57.672 "dma_device_type": 1 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.672 "dma_device_type": 2 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "system", 00:12:57.672 "dma_device_type": 1 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.672 "dma_device_type": 2 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "system", 00:12:57.672 "dma_device_type": 1 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.672 "dma_device_type": 2 00:12:57.672 } 00:12:57.672 ], 00:12:57.672 "driver_specific": { 00:12:57.672 "raid": { 00:12:57.672 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:57.672 "strip_size_kb": 64, 00:12:57.672 "state": "online", 00:12:57.672 "raid_level": "raid0", 00:12:57.672 "superblock": true, 00:12:57.672 "num_base_bdevs": 4, 00:12:57.672 "num_base_bdevs_discovered": 4, 00:12:57.672 "num_base_bdevs_operational": 4, 00:12:57.672 "base_bdevs_list": [ 00:12:57.672 { 00:12:57.672 "name": "BaseBdev1", 00:12:57.672 "uuid": "fabd6885-4afd-42c8-8c0b-ada7f5cf66cb", 00:12:57.672 "is_configured": true, 00:12:57.672 "data_offset": 2048, 00:12:57.672 "data_size": 63488 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "name": "BaseBdev2", 00:12:57.672 "uuid": "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:57.672 "is_configured": true, 00:12:57.672 "data_offset": 2048, 00:12:57.672 "data_size": 63488 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "name": "BaseBdev3", 00:12:57.672 "uuid": "f0dee77b-aff8-4827-95d4-c3dba0938425", 00:12:57.672 "is_configured": true, 
00:12:57.672 "data_offset": 2048, 00:12:57.672 "data_size": 63488 00:12:57.672 }, 00:12:57.672 { 00:12:57.672 "name": "BaseBdev4", 00:12:57.672 "uuid": "0379fda6-abe3-45e5-9a90-11104bcfbb12", 00:12:57.672 "is_configured": true, 00:12:57.672 "data_offset": 2048, 00:12:57.672 "data_size": 63488 00:12:57.672 } 00:12:57.672 ] 00:12:57.672 } 00:12:57.672 } 00:12:57.672 }' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:57.672 BaseBdev2 00:12:57.672 BaseBdev3 00:12:57.672 BaseBdev4' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.672 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.931 13:47:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.931 13:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.931 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.931 [2024-10-01 13:47:08.038479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.931 [2024-10-01 13:47:08.038645] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.931 [2024-10-01 13:47:08.038823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.188 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.189 "name": "Existed_Raid", 00:12:58.189 "uuid": "08d96063-62e3-4b2e-a294-5a7baea534ea", 00:12:58.189 "strip_size_kb": 64, 00:12:58.189 "state": "offline", 00:12:58.189 "raid_level": "raid0", 00:12:58.189 "superblock": true, 00:12:58.189 "num_base_bdevs": 4, 00:12:58.189 "num_base_bdevs_discovered": 3, 00:12:58.189 "num_base_bdevs_operational": 3, 00:12:58.189 "base_bdevs_list": [ 00:12:58.189 { 00:12:58.189 "name": null, 00:12:58.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.189 "is_configured": false, 00:12:58.189 "data_offset": 0, 00:12:58.189 "data_size": 63488 00:12:58.189 }, 00:12:58.189 { 00:12:58.189 "name": "BaseBdev2", 00:12:58.189 "uuid": "69d73934-5f2b-4d28-aacc-e5b1c6a1f5f4", 00:12:58.189 "is_configured": true, 00:12:58.189 "data_offset": 2048, 00:12:58.189 "data_size": 63488 00:12:58.189 }, 00:12:58.189 { 00:12:58.189 "name": "BaseBdev3", 00:12:58.189 "uuid": "f0dee77b-aff8-4827-95d4-c3dba0938425", 00:12:58.189 "is_configured": true, 00:12:58.189 "data_offset": 2048, 00:12:58.189 "data_size": 63488 00:12:58.189 }, 00:12:58.189 { 00:12:58.189 "name": "BaseBdev4", 00:12:58.189 "uuid": "0379fda6-abe3-45e5-9a90-11104bcfbb12", 00:12:58.189 "is_configured": true, 00:12:58.189 "data_offset": 2048, 00:12:58.189 "data_size": 63488 00:12:58.189 } 00:12:58.189 ] 00:12:58.189 }' 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.189 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.446 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.447 
13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.447 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.447 [2024-10-01 13:47:08.637578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.704 [2024-10-01 13:47:08.791907] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.704 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:58.963 13:47:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.963 13:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 [2024-10-01 13:47:08.945093] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:58.963 [2024-10-01 13:47:08.945278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 BaseBdev2 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.963 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 [ 00:12:59.221 { 00:12:59.221 "name": "BaseBdev2", 00:12:59.221 "aliases": [ 00:12:59.221 
"a385180e-e05b-4279-a61a-d4933fe16887" 00:12:59.221 ], 00:12:59.221 "product_name": "Malloc disk", 00:12:59.221 "block_size": 512, 00:12:59.221 "num_blocks": 65536, 00:12:59.221 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:12:59.221 "assigned_rate_limits": { 00:12:59.221 "rw_ios_per_sec": 0, 00:12:59.221 "rw_mbytes_per_sec": 0, 00:12:59.221 "r_mbytes_per_sec": 0, 00:12:59.221 "w_mbytes_per_sec": 0 00:12:59.221 }, 00:12:59.221 "claimed": false, 00:12:59.221 "zoned": false, 00:12:59.221 "supported_io_types": { 00:12:59.221 "read": true, 00:12:59.221 "write": true, 00:12:59.221 "unmap": true, 00:12:59.221 "flush": true, 00:12:59.221 "reset": true, 00:12:59.221 "nvme_admin": false, 00:12:59.221 "nvme_io": false, 00:12:59.221 "nvme_io_md": false, 00:12:59.221 "write_zeroes": true, 00:12:59.221 "zcopy": true, 00:12:59.221 "get_zone_info": false, 00:12:59.221 "zone_management": false, 00:12:59.221 "zone_append": false, 00:12:59.221 "compare": false, 00:12:59.221 "compare_and_write": false, 00:12:59.221 "abort": true, 00:12:59.221 "seek_hole": false, 00:12:59.221 "seek_data": false, 00:12:59.221 "copy": true, 00:12:59.221 "nvme_iov_md": false 00:12:59.221 }, 00:12:59.221 "memory_domains": [ 00:12:59.221 { 00:12:59.221 "dma_device_id": "system", 00:12:59.221 "dma_device_type": 1 00:12:59.221 }, 00:12:59.221 { 00:12:59.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.221 "dma_device_type": 2 00:12:59.221 } 00:12:59.221 ], 00:12:59.221 "driver_specific": {} 00:12:59.221 } 00:12:59.221 ] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.221 13:47:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 BaseBdev3 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 [ 00:12:59.221 { 
00:12:59.221 "name": "BaseBdev3", 00:12:59.221 "aliases": [ 00:12:59.221 "5f616bf1-fecf-4290-8483-1ea886c3a6db" 00:12:59.221 ], 00:12:59.221 "product_name": "Malloc disk", 00:12:59.221 "block_size": 512, 00:12:59.221 "num_blocks": 65536, 00:12:59.221 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:12:59.221 "assigned_rate_limits": { 00:12:59.221 "rw_ios_per_sec": 0, 00:12:59.221 "rw_mbytes_per_sec": 0, 00:12:59.221 "r_mbytes_per_sec": 0, 00:12:59.221 "w_mbytes_per_sec": 0 00:12:59.221 }, 00:12:59.221 "claimed": false, 00:12:59.221 "zoned": false, 00:12:59.221 "supported_io_types": { 00:12:59.221 "read": true, 00:12:59.221 "write": true, 00:12:59.221 "unmap": true, 00:12:59.221 "flush": true, 00:12:59.221 "reset": true, 00:12:59.221 "nvme_admin": false, 00:12:59.221 "nvme_io": false, 00:12:59.221 "nvme_io_md": false, 00:12:59.221 "write_zeroes": true, 00:12:59.221 "zcopy": true, 00:12:59.221 "get_zone_info": false, 00:12:59.221 "zone_management": false, 00:12:59.221 "zone_append": false, 00:12:59.221 "compare": false, 00:12:59.221 "compare_and_write": false, 00:12:59.221 "abort": true, 00:12:59.221 "seek_hole": false, 00:12:59.221 "seek_data": false, 00:12:59.221 "copy": true, 00:12:59.221 "nvme_iov_md": false 00:12:59.221 }, 00:12:59.221 "memory_domains": [ 00:12:59.221 { 00:12:59.221 "dma_device_id": "system", 00:12:59.221 "dma_device_type": 1 00:12:59.221 }, 00:12:59.221 { 00:12:59.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.221 "dma_device_type": 2 00:12:59.221 } 00:12:59.221 ], 00:12:59.221 "driver_specific": {} 00:12:59.221 } 00:12:59.221 ] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:59.221 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.222 BaseBdev4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:59.222 [ 00:12:59.222 { 00:12:59.222 "name": "BaseBdev4", 00:12:59.222 "aliases": [ 00:12:59.222 "837c8880-be36-40fe-a78f-1e6f15227ecb" 00:12:59.222 ], 00:12:59.222 "product_name": "Malloc disk", 00:12:59.222 "block_size": 512, 00:12:59.222 "num_blocks": 65536, 00:12:59.222 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:12:59.222 "assigned_rate_limits": { 00:12:59.222 "rw_ios_per_sec": 0, 00:12:59.222 "rw_mbytes_per_sec": 0, 00:12:59.222 "r_mbytes_per_sec": 0, 00:12:59.222 "w_mbytes_per_sec": 0 00:12:59.222 }, 00:12:59.222 "claimed": false, 00:12:59.222 "zoned": false, 00:12:59.222 "supported_io_types": { 00:12:59.222 "read": true, 00:12:59.222 "write": true, 00:12:59.222 "unmap": true, 00:12:59.222 "flush": true, 00:12:59.222 "reset": true, 00:12:59.222 "nvme_admin": false, 00:12:59.222 "nvme_io": false, 00:12:59.222 "nvme_io_md": false, 00:12:59.222 "write_zeroes": true, 00:12:59.222 "zcopy": true, 00:12:59.222 "get_zone_info": false, 00:12:59.222 "zone_management": false, 00:12:59.222 "zone_append": false, 00:12:59.222 "compare": false, 00:12:59.222 "compare_and_write": false, 00:12:59.222 "abort": true, 00:12:59.222 "seek_hole": false, 00:12:59.222 "seek_data": false, 00:12:59.222 "copy": true, 00:12:59.222 "nvme_iov_md": false 00:12:59.222 }, 00:12:59.222 "memory_domains": [ 00:12:59.222 { 00:12:59.222 "dma_device_id": "system", 00:12:59.222 "dma_device_type": 1 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.222 "dma_device_type": 2 00:12:59.222 } 00:12:59.222 ], 00:12:59.222 "driver_specific": {} 00:12:59.222 } 00:12:59.222 ] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.222 13:47:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.222 [2024-10-01 13:47:09.361470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.222 [2024-10-01 13:47:09.361643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.222 [2024-10-01 13:47:09.361689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.222 [2024-10-01 13:47:09.363904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.222 [2024-10-01 13:47:09.363960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.222 "name": "Existed_Raid", 00:12:59.222 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:12:59.222 "strip_size_kb": 64, 00:12:59.222 "state": "configuring", 00:12:59.222 "raid_level": "raid0", 00:12:59.222 "superblock": true, 00:12:59.222 "num_base_bdevs": 4, 00:12:59.222 "num_base_bdevs_discovered": 3, 00:12:59.222 "num_base_bdevs_operational": 4, 00:12:59.222 "base_bdevs_list": [ 00:12:59.222 { 00:12:59.222 "name": "BaseBdev1", 00:12:59.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.222 "is_configured": false, 00:12:59.222 "data_offset": 0, 00:12:59.222 "data_size": 0 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": "BaseBdev2", 00:12:59.222 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:12:59.222 "is_configured": true, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 
00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": "BaseBdev3", 00:12:59.222 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:12:59.222 "is_configured": true, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": "BaseBdev4", 00:12:59.222 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:12:59.222 "is_configured": true, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 } 00:12:59.222 ] 00:12:59.222 }' 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.222 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 [2024-10-01 13:47:09.776837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.788 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.789 "name": "Existed_Raid", 00:12:59.789 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:12:59.789 "strip_size_kb": 64, 00:12:59.789 "state": "configuring", 00:12:59.789 "raid_level": "raid0", 00:12:59.789 "superblock": true, 00:12:59.789 "num_base_bdevs": 4, 00:12:59.789 "num_base_bdevs_discovered": 2, 00:12:59.789 "num_base_bdevs_operational": 4, 00:12:59.789 "base_bdevs_list": [ 00:12:59.789 { 00:12:59.789 "name": "BaseBdev1", 00:12:59.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.789 "is_configured": false, 00:12:59.789 "data_offset": 0, 00:12:59.789 "data_size": 0 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": null, 00:12:59.789 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:12:59.789 "is_configured": false, 00:12:59.789 "data_offset": 0, 00:12:59.789 "data_size": 63488 
00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "BaseBdev3", 00:12:59.789 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "BaseBdev4", 00:12:59.789 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 } 00:12:59.789 ] 00:12:59.789 }' 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.789 13:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.047 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.306 [2024-10-01 13:47:10.270824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.306 BaseBdev1 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.306 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.306 [ 00:13:00.306 { 00:13:00.306 "name": "BaseBdev1", 00:13:00.306 "aliases": [ 00:13:00.306 "cb80ea3e-5ad8-4259-a9a5-963055a40054" 00:13:00.306 ], 00:13:00.306 "product_name": "Malloc disk", 00:13:00.306 "block_size": 512, 00:13:00.306 "num_blocks": 65536, 00:13:00.306 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:00.306 "assigned_rate_limits": { 00:13:00.306 "rw_ios_per_sec": 0, 00:13:00.306 "rw_mbytes_per_sec": 0, 
00:13:00.306 "r_mbytes_per_sec": 0, 00:13:00.306 "w_mbytes_per_sec": 0 00:13:00.306 }, 00:13:00.306 "claimed": true, 00:13:00.306 "claim_type": "exclusive_write", 00:13:00.306 "zoned": false, 00:13:00.306 "supported_io_types": { 00:13:00.306 "read": true, 00:13:00.306 "write": true, 00:13:00.306 "unmap": true, 00:13:00.306 "flush": true, 00:13:00.306 "reset": true, 00:13:00.306 "nvme_admin": false, 00:13:00.306 "nvme_io": false, 00:13:00.306 "nvme_io_md": false, 00:13:00.306 "write_zeroes": true, 00:13:00.306 "zcopy": true, 00:13:00.306 "get_zone_info": false, 00:13:00.306 "zone_management": false, 00:13:00.306 "zone_append": false, 00:13:00.306 "compare": false, 00:13:00.306 "compare_and_write": false, 00:13:00.306 "abort": true, 00:13:00.306 "seek_hole": false, 00:13:00.306 "seek_data": false, 00:13:00.306 "copy": true, 00:13:00.306 "nvme_iov_md": false 00:13:00.306 }, 00:13:00.306 "memory_domains": [ 00:13:00.306 { 00:13:00.306 "dma_device_id": "system", 00:13:00.306 "dma_device_type": 1 00:13:00.306 }, 00:13:00.306 { 00:13:00.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.306 "dma_device_type": 2 00:13:00.306 } 00:13:00.306 ], 00:13:00.306 "driver_specific": {} 00:13:00.306 } 00:13:00.306 ] 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.307 13:47:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.307 "name": "Existed_Raid", 00:13:00.307 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:00.307 "strip_size_kb": 64, 00:13:00.307 "state": "configuring", 00:13:00.307 "raid_level": "raid0", 00:13:00.307 "superblock": true, 00:13:00.307 "num_base_bdevs": 4, 00:13:00.307 "num_base_bdevs_discovered": 3, 00:13:00.307 "num_base_bdevs_operational": 4, 00:13:00.307 "base_bdevs_list": [ 00:13:00.307 { 00:13:00.307 "name": "BaseBdev1", 00:13:00.307 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:00.307 "is_configured": true, 00:13:00.307 "data_offset": 2048, 00:13:00.307 "data_size": 63488 00:13:00.307 }, 00:13:00.307 { 
00:13:00.307 "name": null, 00:13:00.307 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:00.307 "is_configured": false, 00:13:00.307 "data_offset": 0, 00:13:00.307 "data_size": 63488 00:13:00.307 }, 00:13:00.307 { 00:13:00.307 "name": "BaseBdev3", 00:13:00.307 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:00.307 "is_configured": true, 00:13:00.307 "data_offset": 2048, 00:13:00.307 "data_size": 63488 00:13:00.307 }, 00:13:00.307 { 00:13:00.307 "name": "BaseBdev4", 00:13:00.307 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:00.307 "is_configured": true, 00:13:00.307 "data_offset": 2048, 00:13:00.307 "data_size": 63488 00:13:00.307 } 00:13:00.307 ] 00:13:00.307 }' 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.307 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.565 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.565 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.565 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.565 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.565 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.824 [2024-10-01 13:47:10.782562] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.824 13:47:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.824 "name": "Existed_Raid", 00:13:00.824 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:00.824 "strip_size_kb": 64, 00:13:00.824 "state": "configuring", 00:13:00.824 "raid_level": "raid0", 00:13:00.824 "superblock": true, 00:13:00.824 "num_base_bdevs": 4, 00:13:00.824 "num_base_bdevs_discovered": 2, 00:13:00.824 "num_base_bdevs_operational": 4, 00:13:00.824 "base_bdevs_list": [ 00:13:00.824 { 00:13:00.824 "name": "BaseBdev1", 00:13:00.824 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:00.824 "is_configured": true, 00:13:00.824 "data_offset": 2048, 00:13:00.824 "data_size": 63488 00:13:00.824 }, 00:13:00.824 { 00:13:00.824 "name": null, 00:13:00.824 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:00.824 "is_configured": false, 00:13:00.824 "data_offset": 0, 00:13:00.824 "data_size": 63488 00:13:00.824 }, 00:13:00.824 { 00:13:00.824 "name": null, 00:13:00.824 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:00.824 "is_configured": false, 00:13:00.824 "data_offset": 0, 00:13:00.824 "data_size": 63488 00:13:00.824 }, 00:13:00.824 { 00:13:00.824 "name": "BaseBdev4", 00:13:00.824 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:00.824 "is_configured": true, 00:13:00.824 "data_offset": 2048, 00:13:00.824 "data_size": 63488 00:13:00.824 } 00:13:00.824 ] 00:13:00.824 }' 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.824 13:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.082 
13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 [2024-10-01 13:47:11.222468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.082 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.340 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.340 "name": "Existed_Raid", 00:13:01.340 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:01.340 "strip_size_kb": 64, 00:13:01.340 "state": "configuring", 00:13:01.340 "raid_level": "raid0", 00:13:01.340 "superblock": true, 00:13:01.340 "num_base_bdevs": 4, 00:13:01.340 "num_base_bdevs_discovered": 3, 00:13:01.340 "num_base_bdevs_operational": 4, 00:13:01.340 "base_bdevs_list": [ 00:13:01.340 { 00:13:01.340 "name": "BaseBdev1", 00:13:01.340 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 }, 00:13:01.340 { 00:13:01.340 "name": null, 00:13:01.340 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:01.340 "is_configured": false, 00:13:01.340 "data_offset": 0, 00:13:01.340 "data_size": 63488 00:13:01.340 }, 00:13:01.340 { 00:13:01.340 "name": "BaseBdev3", 00:13:01.340 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 }, 00:13:01.340 { 00:13:01.340 "name": "BaseBdev4", 00:13:01.340 "uuid": 
"837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 } 00:13:01.340 ] 00:13:01.340 }' 00:13:01.340 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.340 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.598 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.598 [2024-10-01 13:47:11.705773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.857 "name": "Existed_Raid", 00:13:01.857 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:01.857 "strip_size_kb": 64, 00:13:01.857 "state": "configuring", 00:13:01.857 "raid_level": "raid0", 00:13:01.857 "superblock": true, 00:13:01.857 "num_base_bdevs": 4, 00:13:01.857 "num_base_bdevs_discovered": 2, 00:13:01.857 "num_base_bdevs_operational": 4, 00:13:01.857 "base_bdevs_list": [ 00:13:01.857 { 00:13:01.857 "name": null, 00:13:01.857 
"uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:01.857 "is_configured": false, 00:13:01.857 "data_offset": 0, 00:13:01.857 "data_size": 63488 00:13:01.857 }, 00:13:01.857 { 00:13:01.857 "name": null, 00:13:01.857 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:01.857 "is_configured": false, 00:13:01.857 "data_offset": 0, 00:13:01.857 "data_size": 63488 00:13:01.857 }, 00:13:01.857 { 00:13:01.857 "name": "BaseBdev3", 00:13:01.857 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:01.857 "is_configured": true, 00:13:01.857 "data_offset": 2048, 00:13:01.857 "data_size": 63488 00:13:01.857 }, 00:13:01.857 { 00:13:01.857 "name": "BaseBdev4", 00:13:01.857 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:01.857 "is_configured": true, 00:13:01.857 "data_offset": 2048, 00:13:01.857 "data_size": 63488 00:13:01.857 } 00:13:01.857 ] 00:13:01.857 }' 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.857 13:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 [2024-10-01 13:47:12.228727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.116 13:47:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.116 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.116 "name": "Existed_Raid", 00:13:02.116 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:02.116 "strip_size_kb": 64, 00:13:02.116 "state": "configuring", 00:13:02.116 "raid_level": "raid0", 00:13:02.116 "superblock": true, 00:13:02.116 "num_base_bdevs": 4, 00:13:02.116 "num_base_bdevs_discovered": 3, 00:13:02.116 "num_base_bdevs_operational": 4, 00:13:02.116 "base_bdevs_list": [ 00:13:02.116 { 00:13:02.116 "name": null, 00:13:02.116 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:02.116 "is_configured": false, 00:13:02.116 "data_offset": 0, 00:13:02.116 "data_size": 63488 00:13:02.116 }, 00:13:02.116 { 00:13:02.116 "name": "BaseBdev2", 00:13:02.116 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:02.116 "is_configured": true, 00:13:02.116 "data_offset": 2048, 00:13:02.116 "data_size": 63488 00:13:02.116 }, 00:13:02.116 { 00:13:02.116 "name": "BaseBdev3", 00:13:02.116 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:02.116 "is_configured": true, 00:13:02.116 "data_offset": 2048, 00:13:02.116 "data_size": 63488 00:13:02.116 }, 00:13:02.116 { 00:13:02.117 "name": "BaseBdev4", 00:13:02.117 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:02.117 "is_configured": true, 00:13:02.117 "data_offset": 2048, 00:13:02.117 "data_size": 63488 00:13:02.117 } 00:13:02.117 ] 00:13:02.117 }' 00:13:02.117 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.117 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.683 13:47:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb80ea3e-5ad8-4259-a9a5-963055a40054 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 [2024-10-01 13:47:12.774288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:02.683 [2024-10-01 13:47:12.774602] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:02.683 [2024-10-01 13:47:12.774619] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:02.683 NewBaseBdev 00:13:02.683 [2024-10-01 13:47:12.774904] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:02.683 [2024-10-01 13:47:12.775051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:02.683 [2024-10-01 13:47:12.775066] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:02.683 [2024-10-01 13:47:12.775207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:02.683 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.683 
13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.683 [ 00:13:02.683 { 00:13:02.683 "name": "NewBaseBdev", 00:13:02.683 "aliases": [ 00:13:02.683 "cb80ea3e-5ad8-4259-a9a5-963055a40054" 00:13:02.683 ], 00:13:02.683 "product_name": "Malloc disk", 00:13:02.683 "block_size": 512, 00:13:02.683 "num_blocks": 65536, 00:13:02.683 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:02.683 "assigned_rate_limits": { 00:13:02.683 "rw_ios_per_sec": 0, 00:13:02.683 "rw_mbytes_per_sec": 0, 00:13:02.683 "r_mbytes_per_sec": 0, 00:13:02.683 "w_mbytes_per_sec": 0 00:13:02.683 }, 00:13:02.683 "claimed": true, 00:13:02.683 "claim_type": "exclusive_write", 00:13:02.683 "zoned": false, 00:13:02.683 "supported_io_types": { 00:13:02.683 "read": true, 00:13:02.683 "write": true, 00:13:02.683 "unmap": true, 00:13:02.683 "flush": true, 00:13:02.683 "reset": true, 00:13:02.683 "nvme_admin": false, 00:13:02.683 "nvme_io": false, 00:13:02.684 "nvme_io_md": false, 00:13:02.684 "write_zeroes": true, 00:13:02.684 "zcopy": true, 00:13:02.684 "get_zone_info": false, 00:13:02.684 "zone_management": false, 00:13:02.684 "zone_append": false, 00:13:02.684 "compare": false, 00:13:02.684 "compare_and_write": false, 00:13:02.684 "abort": true, 00:13:02.684 "seek_hole": false, 00:13:02.684 "seek_data": false, 00:13:02.684 "copy": true, 00:13:02.684 "nvme_iov_md": false 00:13:02.684 }, 00:13:02.684 "memory_domains": [ 00:13:02.684 { 00:13:02.684 "dma_device_id": "system", 00:13:02.684 "dma_device_type": 1 00:13:02.684 }, 00:13:02.684 { 00:13:02.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.684 "dma_device_type": 2 00:13:02.684 } 00:13:02.684 ], 00:13:02.684 "driver_specific": {} 00:13:02.684 } 00:13:02.684 ] 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:02.684 13:47:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.684 "name": "Existed_Raid", 00:13:02.684 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:02.684 "strip_size_kb": 64, 00:13:02.684 
"state": "online", 00:13:02.684 "raid_level": "raid0", 00:13:02.684 "superblock": true, 00:13:02.684 "num_base_bdevs": 4, 00:13:02.684 "num_base_bdevs_discovered": 4, 00:13:02.684 "num_base_bdevs_operational": 4, 00:13:02.684 "base_bdevs_list": [ 00:13:02.684 { 00:13:02.684 "name": "NewBaseBdev", 00:13:02.684 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:02.684 "is_configured": true, 00:13:02.684 "data_offset": 2048, 00:13:02.684 "data_size": 63488 00:13:02.684 }, 00:13:02.684 { 00:13:02.684 "name": "BaseBdev2", 00:13:02.684 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:02.684 "is_configured": true, 00:13:02.684 "data_offset": 2048, 00:13:02.684 "data_size": 63488 00:13:02.684 }, 00:13:02.684 { 00:13:02.684 "name": "BaseBdev3", 00:13:02.684 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:02.684 "is_configured": true, 00:13:02.684 "data_offset": 2048, 00:13:02.684 "data_size": 63488 00:13:02.684 }, 00:13:02.684 { 00:13:02.684 "name": "BaseBdev4", 00:13:02.684 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:02.684 "is_configured": true, 00:13:02.684 "data_offset": 2048, 00:13:02.684 "data_size": 63488 00:13:02.684 } 00:13:02.684 ] 00:13:02.684 }' 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.684 13:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.249 
13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.249 [2024-10-01 13:47:13.273986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.249 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.249 "name": "Existed_Raid", 00:13:03.249 "aliases": [ 00:13:03.249 "f47e36fe-652a-4cd6-90fe-dc0434727d47" 00:13:03.249 ], 00:13:03.249 "product_name": "Raid Volume", 00:13:03.249 "block_size": 512, 00:13:03.249 "num_blocks": 253952, 00:13:03.249 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:03.249 "assigned_rate_limits": { 00:13:03.249 "rw_ios_per_sec": 0, 00:13:03.249 "rw_mbytes_per_sec": 0, 00:13:03.249 "r_mbytes_per_sec": 0, 00:13:03.249 "w_mbytes_per_sec": 0 00:13:03.250 }, 00:13:03.250 "claimed": false, 00:13:03.250 "zoned": false, 00:13:03.250 "supported_io_types": { 00:13:03.250 "read": true, 00:13:03.250 "write": true, 00:13:03.250 "unmap": true, 00:13:03.250 "flush": true, 00:13:03.250 "reset": true, 00:13:03.250 "nvme_admin": false, 00:13:03.250 "nvme_io": false, 00:13:03.250 "nvme_io_md": false, 00:13:03.250 "write_zeroes": true, 00:13:03.250 "zcopy": false, 00:13:03.250 "get_zone_info": false, 00:13:03.250 "zone_management": false, 00:13:03.250 "zone_append": false, 00:13:03.250 "compare": false, 00:13:03.250 "compare_and_write": false, 00:13:03.250 "abort": 
false, 00:13:03.250 "seek_hole": false, 00:13:03.250 "seek_data": false, 00:13:03.250 "copy": false, 00:13:03.250 "nvme_iov_md": false 00:13:03.250 }, 00:13:03.250 "memory_domains": [ 00:13:03.250 { 00:13:03.250 "dma_device_id": "system", 00:13:03.250 "dma_device_type": 1 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.250 "dma_device_type": 2 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "system", 00:13:03.250 "dma_device_type": 1 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.250 "dma_device_type": 2 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "system", 00:13:03.250 "dma_device_type": 1 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.250 "dma_device_type": 2 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "system", 00:13:03.250 "dma_device_type": 1 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.250 "dma_device_type": 2 00:13:03.250 } 00:13:03.250 ], 00:13:03.250 "driver_specific": { 00:13:03.250 "raid": { 00:13:03.250 "uuid": "f47e36fe-652a-4cd6-90fe-dc0434727d47", 00:13:03.250 "strip_size_kb": 64, 00:13:03.250 "state": "online", 00:13:03.250 "raid_level": "raid0", 00:13:03.250 "superblock": true, 00:13:03.250 "num_base_bdevs": 4, 00:13:03.250 "num_base_bdevs_discovered": 4, 00:13:03.250 "num_base_bdevs_operational": 4, 00:13:03.250 "base_bdevs_list": [ 00:13:03.250 { 00:13:03.250 "name": "NewBaseBdev", 00:13:03.250 "uuid": "cb80ea3e-5ad8-4259-a9a5-963055a40054", 00:13:03.250 "is_configured": true, 00:13:03.250 "data_offset": 2048, 00:13:03.250 "data_size": 63488 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "name": "BaseBdev2", 00:13:03.250 "uuid": "a385180e-e05b-4279-a61a-d4933fe16887", 00:13:03.250 "is_configured": true, 00:13:03.250 "data_offset": 2048, 00:13:03.250 "data_size": 63488 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 
"name": "BaseBdev3", 00:13:03.250 "uuid": "5f616bf1-fecf-4290-8483-1ea886c3a6db", 00:13:03.250 "is_configured": true, 00:13:03.250 "data_offset": 2048, 00:13:03.250 "data_size": 63488 00:13:03.250 }, 00:13:03.250 { 00:13:03.250 "name": "BaseBdev4", 00:13:03.250 "uuid": "837c8880-be36-40fe-a78f-1e6f15227ecb", 00:13:03.250 "is_configured": true, 00:13:03.250 "data_offset": 2048, 00:13:03.250 "data_size": 63488 00:13:03.250 } 00:13:03.250 ] 00:13:03.250 } 00:13:03.250 } 00:13:03.250 }' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:03.250 BaseBdev2 00:13:03.250 BaseBdev3 00:13:03.250 BaseBdev4' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.250 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.509 13:47:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 [2024-10-01 13:47:13.601225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.509 [2024-10-01 13:47:13.601380] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.509 [2024-10-01 13:47:13.601495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.509 [2024-10-01 13:47:13.601564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.509 [2024-10-01 13:47:13.601577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69939 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 69939 ']' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 69939 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69939 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.509 killing process with pid 69939 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69939' 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 69939 00:13:03.509 [2024-10-01 13:47:13.654720] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.509 13:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 69939 00:13:04.075 [2024-10-01 13:47:14.057548] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.449 13:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:05.449 ************************************ 00:13:05.449 END TEST raid_state_function_test_sb 00:13:05.449 ************************************ 00:13:05.449 00:13:05.449 real 0m11.472s 00:13:05.449 user 0m17.956s 00:13:05.449 sys 
0m2.361s 00:13:05.449 13:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.449 13:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.449 13:47:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:05.449 13:47:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:05.450 13:47:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.450 13:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.450 ************************************ 00:13:05.450 START TEST raid_superblock_test 00:13:05.450 ************************************ 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70613 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70613 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70613 ']' 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.450 13:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.450 [2024-10-01 13:47:15.510807] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:05.450 [2024-10-01 13:47:15.510939] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70613 ] 00:13:05.708 [2024-10-01 13:47:15.681299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.708 [2024-10-01 13:47:15.895808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.968 [2024-10-01 13:47:16.108581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.968 [2024-10-01 13:47:16.108645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:06.227 
13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.227 malloc1 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.227 [2024-10-01 13:47:16.403558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:06.227 [2024-10-01 13:47:16.403632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.227 [2024-10-01 13:47:16.403658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.227 [2024-10-01 13:47:16.403673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.227 [2024-10-01 13:47:16.406012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.227 [2024-10-01 13:47:16.406052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:06.227 pt1 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.227 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.486 malloc2 00:13:06.486 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 [2024-10-01 13:47:16.461705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.487 [2024-10-01 13:47:16.461878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.487 [2024-10-01 13:47:16.461912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.487 [2024-10-01 13:47:16.461925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.487 [2024-10-01 13:47:16.464239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.487 [2024-10-01 13:47:16.464278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.487 
pt2 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 malloc3 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 [2024-10-01 13:47:16.514972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.487 [2024-10-01 13:47:16.515024] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.487 [2024-10-01 13:47:16.515047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:06.487 [2024-10-01 13:47:16.515058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.487 [2024-10-01 13:47:16.517337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.487 [2024-10-01 13:47:16.517494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.487 pt3 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 malloc4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 [2024-10-01 13:47:16.572219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:06.487 [2024-10-01 13:47:16.572372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.487 [2024-10-01 13:47:16.572442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:06.487 [2024-10-01 13:47:16.572520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.487 [2024-10-01 13:47:16.574863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.487 [2024-10-01 13:47:16.574993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:06.487 pt4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 [2024-10-01 13:47:16.584263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.487 [2024-10-01 
13:47:16.586298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.487 [2024-10-01 13:47:16.586362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.487 [2024-10-01 13:47:16.586441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:06.487 [2024-10-01 13:47:16.586615] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:06.487 [2024-10-01 13:47:16.586633] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:06.487 [2024-10-01 13:47:16.586893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:06.487 [2024-10-01 13:47:16.587046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:06.487 [2024-10-01 13:47:16.587061] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:06.487 [2024-10-01 13:47:16.587214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.487 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.488 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.488 "name": "raid_bdev1", 00:13:06.488 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:06.488 "strip_size_kb": 64, 00:13:06.488 "state": "online", 00:13:06.488 "raid_level": "raid0", 00:13:06.488 "superblock": true, 00:13:06.488 "num_base_bdevs": 4, 00:13:06.488 "num_base_bdevs_discovered": 4, 00:13:06.488 "num_base_bdevs_operational": 4, 00:13:06.488 "base_bdevs_list": [ 00:13:06.488 { 00:13:06.488 "name": "pt1", 00:13:06.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.488 "is_configured": true, 00:13:06.488 "data_offset": 2048, 00:13:06.488 "data_size": 63488 00:13:06.488 }, 00:13:06.488 { 00:13:06.488 "name": "pt2", 00:13:06.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.488 "is_configured": true, 00:13:06.488 "data_offset": 2048, 00:13:06.488 "data_size": 63488 00:13:06.488 }, 00:13:06.488 { 00:13:06.488 "name": "pt3", 00:13:06.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.488 "is_configured": true, 00:13:06.488 "data_offset": 2048, 00:13:06.488 
"data_size": 63488 00:13:06.488 }, 00:13:06.488 { 00:13:06.488 "name": "pt4", 00:13:06.488 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:06.488 "is_configured": true, 00:13:06.488 "data_offset": 2048, 00:13:06.488 "data_size": 63488 00:13:06.488 } 00:13:06.488 ] 00:13:06.488 }' 00:13:06.488 13:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.488 13:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.054 [2024-10-01 13:47:17.011947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.054 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.055 "name": "raid_bdev1", 00:13:07.055 "aliases": [ 00:13:07.055 "4e2610aa-ded9-4dd7-85d8-83780332c3e2" 
00:13:07.055 ], 00:13:07.055 "product_name": "Raid Volume", 00:13:07.055 "block_size": 512, 00:13:07.055 "num_blocks": 253952, 00:13:07.055 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:07.055 "assigned_rate_limits": { 00:13:07.055 "rw_ios_per_sec": 0, 00:13:07.055 "rw_mbytes_per_sec": 0, 00:13:07.055 "r_mbytes_per_sec": 0, 00:13:07.055 "w_mbytes_per_sec": 0 00:13:07.055 }, 00:13:07.055 "claimed": false, 00:13:07.055 "zoned": false, 00:13:07.055 "supported_io_types": { 00:13:07.055 "read": true, 00:13:07.055 "write": true, 00:13:07.055 "unmap": true, 00:13:07.055 "flush": true, 00:13:07.055 "reset": true, 00:13:07.055 "nvme_admin": false, 00:13:07.055 "nvme_io": false, 00:13:07.055 "nvme_io_md": false, 00:13:07.055 "write_zeroes": true, 00:13:07.055 "zcopy": false, 00:13:07.055 "get_zone_info": false, 00:13:07.055 "zone_management": false, 00:13:07.055 "zone_append": false, 00:13:07.055 "compare": false, 00:13:07.055 "compare_and_write": false, 00:13:07.055 "abort": false, 00:13:07.055 "seek_hole": false, 00:13:07.055 "seek_data": false, 00:13:07.055 "copy": false, 00:13:07.055 "nvme_iov_md": false 00:13:07.055 }, 00:13:07.055 "memory_domains": [ 00:13:07.055 { 00:13:07.055 "dma_device_id": "system", 00:13:07.055 "dma_device_type": 1 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.055 "dma_device_type": 2 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "system", 00:13:07.055 "dma_device_type": 1 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.055 "dma_device_type": 2 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "system", 00:13:07.055 "dma_device_type": 1 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.055 "dma_device_type": 2 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": "system", 00:13:07.055 "dma_device_type": 1 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:07.055 "dma_device_type": 2 00:13:07.055 } 00:13:07.055 ], 00:13:07.055 "driver_specific": { 00:13:07.055 "raid": { 00:13:07.055 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:07.055 "strip_size_kb": 64, 00:13:07.055 "state": "online", 00:13:07.055 "raid_level": "raid0", 00:13:07.055 "superblock": true, 00:13:07.055 "num_base_bdevs": 4, 00:13:07.055 "num_base_bdevs_discovered": 4, 00:13:07.055 "num_base_bdevs_operational": 4, 00:13:07.055 "base_bdevs_list": [ 00:13:07.055 { 00:13:07.055 "name": "pt1", 00:13:07.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.055 "is_configured": true, 00:13:07.055 "data_offset": 2048, 00:13:07.055 "data_size": 63488 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "name": "pt2", 00:13:07.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.055 "is_configured": true, 00:13:07.055 "data_offset": 2048, 00:13:07.055 "data_size": 63488 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "name": "pt3", 00:13:07.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.055 "is_configured": true, 00:13:07.055 "data_offset": 2048, 00:13:07.055 "data_size": 63488 00:13:07.055 }, 00:13:07.055 { 00:13:07.055 "name": "pt4", 00:13:07.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:07.055 "is_configured": true, 00:13:07.055 "data_offset": 2048, 00:13:07.055 "data_size": 63488 00:13:07.055 } 00:13:07.055 ] 00:13:07.055 } 00:13:07.055 } 00:13:07.055 }' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:07.055 pt2 00:13:07.055 pt3 00:13:07.055 pt4' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.055 13:47:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.055 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.314 [2024-10-01 13:47:17.331860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e2610aa-ded9-4dd7-85d8-83780332c3e2 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4e2610aa-ded9-4dd7-85d8-83780332c3e2 ']' 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.314 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.314 [2024-10-01 13:47:17.375583] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.315 [2024-10-01 13:47:17.375617] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.315 [2024-10-01 13:47:17.375706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.315 [2024-10-01 13:47:17.375778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.315 [2024-10-01 13:47:17.375796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.575 13:47:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.575 [2024-10-01 13:47:17.539459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:07.575 [2024-10-01 13:47:17.541721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:07.575 [2024-10-01 13:47:17.541771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:07.575 [2024-10-01 13:47:17.541806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:07.575 [2024-10-01 13:47:17.541859] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:07.575 [2024-10-01 13:47:17.541915] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:07.575 [2024-10-01 13:47:17.541938] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:07.575 [2024-10-01 13:47:17.541960] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:07.575 [2024-10-01 13:47:17.541976] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.575 [2024-10-01 13:47:17.541989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:07.575 request: 00:13:07.575 { 00:13:07.575 "name": "raid_bdev1", 00:13:07.575 "raid_level": "raid0", 00:13:07.575 "base_bdevs": [ 00:13:07.575 "malloc1", 00:13:07.575 "malloc2", 00:13:07.575 "malloc3", 00:13:07.575 "malloc4" 00:13:07.575 ], 00:13:07.575 "strip_size_kb": 64, 00:13:07.575 "superblock": false, 00:13:07.575 "method": "bdev_raid_create", 00:13:07.575 "req_id": 1 00:13:07.575 } 00:13:07.575 Got JSON-RPC error response 00:13:07.575 response: 00:13:07.575 { 00:13:07.575 "code": -17, 00:13:07.575 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:07.575 } 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.575 [2024-10-01 13:47:17.599352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:07.575 [2024-10-01 13:47:17.599571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.575 [2024-10-01 13:47:17.599602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:07.575 [2024-10-01 13:47:17.599617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.575 [2024-10-01 13:47:17.602116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.575 [2024-10-01 13:47:17.602165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:07.575 [2024-10-01 13:47:17.602260] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:07.575 [2024-10-01 13:47:17.602326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:07.575 pt1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.575 "name": "raid_bdev1", 00:13:07.575 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:07.575 "strip_size_kb": 64, 00:13:07.575 "state": "configuring", 00:13:07.575 "raid_level": "raid0", 00:13:07.575 "superblock": true, 00:13:07.575 "num_base_bdevs": 4, 00:13:07.575 "num_base_bdevs_discovered": 1, 00:13:07.575 "num_base_bdevs_operational": 4, 00:13:07.575 "base_bdevs_list": [ 00:13:07.575 { 00:13:07.575 "name": "pt1", 00:13:07.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.575 "is_configured": true, 00:13:07.575 "data_offset": 2048, 00:13:07.575 "data_size": 63488 00:13:07.575 }, 00:13:07.575 { 00:13:07.575 "name": null, 00:13:07.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.575 "is_configured": false, 00:13:07.575 "data_offset": 2048, 00:13:07.575 "data_size": 63488 00:13:07.575 }, 00:13:07.575 { 00:13:07.575 "name": null, 00:13:07.575 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.575 "is_configured": false, 00:13:07.575 "data_offset": 2048, 00:13:07.575 "data_size": 63488 00:13:07.575 }, 00:13:07.575 { 00:13:07.575 "name": null, 00:13:07.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:07.575 "is_configured": false, 00:13:07.575 "data_offset": 2048, 00:13:07.575 "data_size": 63488 00:13:07.575 } 00:13:07.575 ] 00:13:07.575 }' 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.575 13:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.143 [2024-10-01 13:47:18.038703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.143 [2024-10-01 13:47:18.038906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.143 [2024-10-01 13:47:18.038936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:08.143 [2024-10-01 13:47:18.038951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.143 [2024-10-01 13:47:18.039449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.143 [2024-10-01 13:47:18.039477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.143 [2024-10-01 13:47:18.039566] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:08.143 [2024-10-01 13:47:18.039594] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.143 pt2 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.143 [2024-10-01 13:47:18.050685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.143 13:47:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.143 "name": "raid_bdev1", 00:13:08.143 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:08.143 "strip_size_kb": 64, 00:13:08.143 "state": "configuring", 00:13:08.143 "raid_level": "raid0", 00:13:08.143 "superblock": true, 00:13:08.143 "num_base_bdevs": 4, 00:13:08.143 "num_base_bdevs_discovered": 1, 00:13:08.143 "num_base_bdevs_operational": 4, 00:13:08.143 "base_bdevs_list": [ 00:13:08.143 { 00:13:08.143 "name": "pt1", 00:13:08.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.143 "is_configured": true, 00:13:08.143 "data_offset": 2048, 00:13:08.143 "data_size": 63488 00:13:08.143 }, 00:13:08.143 { 00:13:08.143 "name": null, 00:13:08.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.143 "is_configured": false, 00:13:08.143 "data_offset": 0, 00:13:08.143 "data_size": 63488 00:13:08.143 }, 00:13:08.143 { 00:13:08.143 "name": null, 00:13:08.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.143 "is_configured": false, 00:13:08.143 "data_offset": 2048, 00:13:08.143 "data_size": 63488 00:13:08.143 }, 00:13:08.143 { 00:13:08.143 "name": null, 00:13:08.143 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.143 "is_configured": false, 00:13:08.143 "data_offset": 2048, 00:13:08.143 "data_size": 63488 00:13:08.143 } 00:13:08.143 ] 00:13:08.143 }' 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.143 13:47:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.402 [2024-10-01 13:47:18.486563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.402 [2024-10-01 13:47:18.486631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.402 [2024-10-01 13:47:18.486654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:08.402 [2024-10-01 13:47:18.486666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.402 [2024-10-01 13:47:18.487133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.402 [2024-10-01 13:47:18.487153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.402 [2024-10-01 13:47:18.487240] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:08.402 [2024-10-01 13:47:18.487263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.402 pt2 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.402 [2024-10-01 13:47:18.498534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:08.402 [2024-10-01 13:47:18.498707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.402 [2024-10-01 13:47:18.498743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:08.402 [2024-10-01 13:47:18.498755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.402 [2024-10-01 13:47:18.499122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.402 [2024-10-01 13:47:18.499139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:08.402 [2024-10-01 13:47:18.499204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:08.402 [2024-10-01 13:47:18.499223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:08.402 pt3 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.402 [2024-10-01 13:47:18.510496] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:08.402 [2024-10-01 13:47:18.510546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.402 [2024-10-01 13:47:18.510566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:08.402 [2024-10-01 13:47:18.510577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.402 [2024-10-01 13:47:18.510928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.402 [2024-10-01 13:47:18.510945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:08.402 [2024-10-01 13:47:18.511006] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:08.402 [2024-10-01 13:47:18.511024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:08.402 [2024-10-01 13:47:18.511153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:08.402 [2024-10-01 13:47:18.511162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:08.402 [2024-10-01 13:47:18.511432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:08.402 [2024-10-01 13:47:18.511565] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:08.402 [2024-10-01 13:47:18.511579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:08.402 [2024-10-01 13:47:18.511706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.402 pt4 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:08.402 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.403 "name": "raid_bdev1", 00:13:08.403 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:08.403 "strip_size_kb": 64, 00:13:08.403 "state": "online", 00:13:08.403 "raid_level": "raid0", 00:13:08.403 
"superblock": true, 00:13:08.403 "num_base_bdevs": 4, 00:13:08.403 "num_base_bdevs_discovered": 4, 00:13:08.403 "num_base_bdevs_operational": 4, 00:13:08.403 "base_bdevs_list": [ 00:13:08.403 { 00:13:08.403 "name": "pt1", 00:13:08.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.403 "is_configured": true, 00:13:08.403 "data_offset": 2048, 00:13:08.403 "data_size": 63488 00:13:08.403 }, 00:13:08.403 { 00:13:08.403 "name": "pt2", 00:13:08.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.403 "is_configured": true, 00:13:08.403 "data_offset": 2048, 00:13:08.403 "data_size": 63488 00:13:08.403 }, 00:13:08.403 { 00:13:08.403 "name": "pt3", 00:13:08.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.403 "is_configured": true, 00:13:08.403 "data_offset": 2048, 00:13:08.403 "data_size": 63488 00:13:08.403 }, 00:13:08.403 { 00:13:08.403 "name": "pt4", 00:13:08.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.403 "is_configured": true, 00:13:08.403 "data_offset": 2048, 00:13:08.403 "data_size": 63488 00:13:08.403 } 00:13:08.403 ] 00:13:08.403 }' 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.403 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.975 13:47:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.975 [2024-10-01 13:47:18.926884] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.975 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.975 "name": "raid_bdev1", 00:13:08.975 "aliases": [ 00:13:08.975 "4e2610aa-ded9-4dd7-85d8-83780332c3e2" 00:13:08.975 ], 00:13:08.975 "product_name": "Raid Volume", 00:13:08.975 "block_size": 512, 00:13:08.975 "num_blocks": 253952, 00:13:08.976 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:08.976 "assigned_rate_limits": { 00:13:08.976 "rw_ios_per_sec": 0, 00:13:08.976 "rw_mbytes_per_sec": 0, 00:13:08.976 "r_mbytes_per_sec": 0, 00:13:08.976 "w_mbytes_per_sec": 0 00:13:08.976 }, 00:13:08.976 "claimed": false, 00:13:08.976 "zoned": false, 00:13:08.976 "supported_io_types": { 00:13:08.976 "read": true, 00:13:08.976 "write": true, 00:13:08.976 "unmap": true, 00:13:08.976 "flush": true, 00:13:08.976 "reset": true, 00:13:08.976 "nvme_admin": false, 00:13:08.976 "nvme_io": false, 00:13:08.976 "nvme_io_md": false, 00:13:08.976 "write_zeroes": true, 00:13:08.976 "zcopy": false, 00:13:08.976 "get_zone_info": false, 00:13:08.976 "zone_management": false, 00:13:08.976 "zone_append": false, 00:13:08.976 "compare": false, 00:13:08.976 "compare_and_write": false, 00:13:08.976 "abort": false, 00:13:08.976 "seek_hole": false, 00:13:08.976 "seek_data": false, 00:13:08.976 "copy": false, 00:13:08.976 "nvme_iov_md": false 00:13:08.976 }, 00:13:08.976 
"memory_domains": [ 00:13:08.976 { 00:13:08.976 "dma_device_id": "system", 00:13:08.976 "dma_device_type": 1 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.976 "dma_device_type": 2 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "system", 00:13:08.976 "dma_device_type": 1 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.976 "dma_device_type": 2 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "system", 00:13:08.976 "dma_device_type": 1 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.976 "dma_device_type": 2 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "system", 00:13:08.976 "dma_device_type": 1 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.976 "dma_device_type": 2 00:13:08.976 } 00:13:08.976 ], 00:13:08.976 "driver_specific": { 00:13:08.976 "raid": { 00:13:08.976 "uuid": "4e2610aa-ded9-4dd7-85d8-83780332c3e2", 00:13:08.976 "strip_size_kb": 64, 00:13:08.976 "state": "online", 00:13:08.976 "raid_level": "raid0", 00:13:08.976 "superblock": true, 00:13:08.976 "num_base_bdevs": 4, 00:13:08.976 "num_base_bdevs_discovered": 4, 00:13:08.976 "num_base_bdevs_operational": 4, 00:13:08.976 "base_bdevs_list": [ 00:13:08.976 { 00:13:08.976 "name": "pt1", 00:13:08.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.976 "is_configured": true, 00:13:08.976 "data_offset": 2048, 00:13:08.976 "data_size": 63488 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "name": "pt2", 00:13:08.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.976 "is_configured": true, 00:13:08.976 "data_offset": 2048, 00:13:08.976 "data_size": 63488 00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "name": "pt3", 00:13:08.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.976 "is_configured": true, 00:13:08.976 "data_offset": 2048, 00:13:08.976 "data_size": 63488 
00:13:08.976 }, 00:13:08.976 { 00:13:08.976 "name": "pt4", 00:13:08.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.976 "is_configured": true, 00:13:08.976 "data_offset": 2048, 00:13:08.976 "data_size": 63488 00:13:08.976 } 00:13:08.976 ] 00:13:08.976 } 00:13:08.976 } 00:13:08.976 }' 00:13:08.976 13:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:08.976 pt2 00:13:08.976 pt3 00:13:08.976 pt4' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:08.976 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.977 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.977 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:09.237 [2024-10-01 13:47:19.258833] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4e2610aa-ded9-4dd7-85d8-83780332c3e2 '!=' 4e2610aa-ded9-4dd7-85d8-83780332c3e2 ']' 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70613 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70613 ']' 00:13:09.237 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70613 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70613 00:13:09.238 killing process with pid 70613 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70613' 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70613 00:13:09.238 [2024-10-01 13:47:19.335984] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.238 [2024-10-01 13:47:19.336080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.238 13:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70613 00:13:09.238 [2024-10-01 13:47:19.336154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.238 [2024-10-01 13:47:19.336165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:09.807 [2024-10-01 13:47:19.731931] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.185 13:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:11.185 00:13:11.185 real 0m5.595s 00:13:11.185 user 0m7.878s 00:13:11.185 sys 0m1.063s 00:13:11.185 13:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.185 ************************************ 00:13:11.185 END TEST raid_superblock_test 00:13:11.185 ************************************ 00:13:11.185 13:47:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.185 13:47:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:11.185 13:47:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:11.185 13:47:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.185 13:47:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.185 ************************************ 00:13:11.185 START TEST raid_read_error_test 00:13:11.185 ************************************ 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hzlpKawmLD 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@810 -- # raid_pid=70874 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70874 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 70874 ']' 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.185 13:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.185 [2024-10-01 13:47:21.214882] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:11.185 [2024-10-01 13:47:21.215068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ] 00:13:11.443 [2024-10-01 13:47:21.409639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.444 [2024-10-01 13:47:21.624476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.702 [2024-10-01 13:47:21.808970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.702 [2024-10-01 13:47:21.809040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.962 BaseBdev1_malloc 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.962 true 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.962 [2024-10-01 13:47:22.100676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.962 [2024-10-01 13:47:22.100736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.962 [2024-10-01 13:47:22.100757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:11.962 [2024-10-01 13:47:22.100771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.962 [2024-10-01 13:47:22.103164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.962 [2024-10-01 13:47:22.103210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.962 BaseBdev1 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.962 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 BaseBdev2_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 true 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 [2024-10-01 13:47:22.181911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:12.222 [2024-10-01 13:47:22.181975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.222 [2024-10-01 13:47:22.181996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:12.222 [2024-10-01 13:47:22.182010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.222 [2024-10-01 13:47:22.184515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.222 [2024-10-01 13:47:22.184661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:12.222 BaseBdev2 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 BaseBdev3_malloc 00:13:12.222 13:47:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 true 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 [2024-10-01 13:47:22.249146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:12.222 [2024-10-01 13:47:22.249202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.222 [2024-10-01 13:47:22.249221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:12.222 [2024-10-01 13:47:22.249235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.222 [2024-10-01 13:47:22.251594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.222 [2024-10-01 13:47:22.251637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:12.222 BaseBdev3 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 BaseBdev4_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 true 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 [2024-10-01 13:47:22.316739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:12.222 [2024-10-01 13:47:22.316808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.222 [2024-10-01 13:47:22.316832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:12.222 [2024-10-01 13:47:22.316846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.222 [2024-10-01 13:47:22.319266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.222 [2024-10-01 13:47:22.319313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:12.222 BaseBdev4 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 [2024-10-01 13:47:22.328800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.222 [2024-10-01 13:47:22.330930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.222 [2024-10-01 13:47:22.331015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.222 [2024-10-01 13:47:22.331077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.222 [2024-10-01 13:47:22.331308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:12.222 [2024-10-01 13:47:22.331326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:12.222 [2024-10-01 13:47:22.331648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:12.222 [2024-10-01 13:47:22.331818] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:12.222 [2024-10-01 13:47:22.331829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:12.222 [2024-10-01 13:47:22.332021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:12.222 13:47:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.223 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.223 "name": "raid_bdev1", 00:13:12.223 "uuid": "d35fb12a-6e84-4ba3-9fdc-e389e64c76d6", 00:13:12.223 "strip_size_kb": 64, 00:13:12.223 "state": "online", 00:13:12.223 "raid_level": "raid0", 00:13:12.223 "superblock": true, 00:13:12.223 "num_base_bdevs": 4, 00:13:12.223 "num_base_bdevs_discovered": 4, 00:13:12.223 "num_base_bdevs_operational": 4, 00:13:12.223 "base_bdevs_list": [ 00:13:12.223 
{ 00:13:12.223 "name": "BaseBdev1", 00:13:12.223 "uuid": "abccd1d8-2e91-5619-9de9-e189f39781b3", 00:13:12.223 "is_configured": true, 00:13:12.223 "data_offset": 2048, 00:13:12.223 "data_size": 63488 00:13:12.223 }, 00:13:12.223 { 00:13:12.223 "name": "BaseBdev2", 00:13:12.223 "uuid": "e5a25fd2-6e08-5a00-a02e-f85d717d545b", 00:13:12.223 "is_configured": true, 00:13:12.223 "data_offset": 2048, 00:13:12.223 "data_size": 63488 00:13:12.223 }, 00:13:12.223 { 00:13:12.223 "name": "BaseBdev3", 00:13:12.223 "uuid": "e2d9f2fa-4291-5825-b8f6-8709c22f879f", 00:13:12.223 "is_configured": true, 00:13:12.223 "data_offset": 2048, 00:13:12.223 "data_size": 63488 00:13:12.223 }, 00:13:12.223 { 00:13:12.223 "name": "BaseBdev4", 00:13:12.223 "uuid": "2eee0033-6f52-53ba-950f-9cfd762a9e4d", 00:13:12.223 "is_configured": true, 00:13:12.223 "data_offset": 2048, 00:13:12.223 "data_size": 63488 00:13:12.223 } 00:13:12.223 ] 00:13:12.223 }' 00:13:12.223 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.223 13:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.841 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:12.841 13:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:12.841 [2024-10-01 13:47:22.833485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.778 13:47:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.778 13:47:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.778 "name": "raid_bdev1", 00:13:13.778 "uuid": "d35fb12a-6e84-4ba3-9fdc-e389e64c76d6", 00:13:13.778 "strip_size_kb": 64, 00:13:13.778 "state": "online", 00:13:13.778 "raid_level": "raid0", 00:13:13.778 "superblock": true, 00:13:13.778 "num_base_bdevs": 4, 00:13:13.778 "num_base_bdevs_discovered": 4, 00:13:13.778 "num_base_bdevs_operational": 4, 00:13:13.778 "base_bdevs_list": [ 00:13:13.778 { 00:13:13.778 "name": "BaseBdev1", 00:13:13.778 "uuid": "abccd1d8-2e91-5619-9de9-e189f39781b3", 00:13:13.778 "is_configured": true, 00:13:13.778 "data_offset": 2048, 00:13:13.778 "data_size": 63488 00:13:13.778 }, 00:13:13.778 { 00:13:13.778 "name": "BaseBdev2", 00:13:13.778 "uuid": "e5a25fd2-6e08-5a00-a02e-f85d717d545b", 00:13:13.778 "is_configured": true, 00:13:13.778 "data_offset": 2048, 00:13:13.778 "data_size": 63488 00:13:13.778 }, 00:13:13.778 { 00:13:13.778 "name": "BaseBdev3", 00:13:13.778 "uuid": "e2d9f2fa-4291-5825-b8f6-8709c22f879f", 00:13:13.778 "is_configured": true, 00:13:13.778 "data_offset": 2048, 00:13:13.778 "data_size": 63488 00:13:13.778 }, 00:13:13.778 { 00:13:13.778 "name": "BaseBdev4", 00:13:13.778 "uuid": "2eee0033-6f52-53ba-950f-9cfd762a9e4d", 00:13:13.778 "is_configured": true, 00:13:13.778 "data_offset": 2048, 00:13:13.778 "data_size": 63488 00:13:13.778 } 00:13:13.778 ] 00:13:13.778 }' 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.778 13:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 [2024-10-01 13:47:24.184666] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.038 [2024-10-01 13:47:24.184708] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.038 [2024-10-01 13:47:24.187537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.038 [2024-10-01 13:47:24.187611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.038 [2024-10-01 13:47:24.187656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.038 [2024-10-01 13:47:24.187671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:14.038 { 00:13:14.038 "results": [ 00:13:14.038 { 00:13:14.038 "job": "raid_bdev1", 00:13:14.038 "core_mask": "0x1", 00:13:14.038 "workload": "randrw", 00:13:14.038 "percentage": 50, 00:13:14.038 "status": "finished", 00:13:14.038 "queue_depth": 1, 00:13:14.038 "io_size": 131072, 00:13:14.038 "runtime": 1.351253, 00:13:14.038 "iops": 16073.229809665547, 00:13:14.038 "mibps": 2009.1537262081933, 00:13:14.038 "io_failed": 1, 00:13:14.038 "io_timeout": 0, 00:13:14.038 "avg_latency_us": 86.28661415459258, 00:13:14.038 "min_latency_us": 26.936546184738955, 00:13:14.038 "max_latency_us": 1408.1028112449799 00:13:14.038 } 00:13:14.038 ], 00:13:14.038 "core_count": 1 00:13:14.038 } 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70874 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 70874 ']' 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 70874 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70874 00:13:14.038 killing process with pid 70874 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70874' 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 70874 00:13:14.038 [2024-10-01 13:47:24.227846] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.038 13:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 70874 00:13:14.606 [2024-10-01 13:47:24.554323] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hzlpKawmLD 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:15.990 00:13:15.990 real 0m4.858s 00:13:15.990 user 0m5.602s 00:13:15.990 sys 0m0.657s 00:13:15.990 ************************************ 00:13:15.990 END TEST raid_read_error_test 
00:13:15.990 ************************************ 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.990 13:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.990 13:47:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:15.990 13:47:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:15.990 13:47:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.990 13:47:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.990 ************************************ 00:13:15.990 START TEST raid_write_error_test 00:13:15.990 ************************************ 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rP3IuslUGz 00:13:15.990 13:47:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71027 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71027 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71027 ']' 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.990 13:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.990 [2024-10-01 13:47:26.160794] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:15.990 [2024-10-01 13:47:26.160930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71027 ] 00:13:16.249 [2024-10-01 13:47:26.321584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.506 [2024-10-01 13:47:26.547894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.765 [2024-10-01 13:47:26.779891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.765 [2024-10-01 13:47:26.779964] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.024 BaseBdev1_malloc 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.024 true 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.024 [2024-10-01 13:47:27.163003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:17.024 [2024-10-01 13:47:27.163064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.024 [2024-10-01 13:47:27.163086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:17.024 [2024-10-01 13:47:27.163100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.024 [2024-10-01 13:47:27.165566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.024 [2024-10-01 13:47:27.165609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.024 BaseBdev1 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.024 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.282 BaseBdev2_malloc 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:17.282 13:47:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.282 true 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.282 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.282 [2024-10-01 13:47:27.240010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:17.282 [2024-10-01 13:47:27.240215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.282 [2024-10-01 13:47:27.240247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:17.282 [2024-10-01 13:47:27.240262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.283 [2024-10-01 13:47:27.242782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.283 [2024-10-01 13:47:27.242826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.283 BaseBdev2 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:17.283 BaseBdev3_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 true 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 [2024-10-01 13:47:27.310008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:17.283 [2024-10-01 13:47:27.310208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.283 [2024-10-01 13:47:27.310268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:17.283 [2024-10-01 13:47:27.310376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.283 [2024-10-01 13:47:27.313017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.283 [2024-10-01 13:47:27.313178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.283 BaseBdev3 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 BaseBdev4_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 true 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 [2024-10-01 13:47:27.378057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:17.283 [2024-10-01 13:47:27.378117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.283 [2024-10-01 13:47:27.378138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:17.283 [2024-10-01 13:47:27.378154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.283 [2024-10-01 13:47:27.380587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.283 [2024-10-01 13:47:27.380634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:17.283 BaseBdev4 
00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 [2024-10-01 13:47:27.390114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.283 [2024-10-01 13:47:27.392234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.283 [2024-10-01 13:47:27.392458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.283 [2024-10-01 13:47:27.392534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.283 [2024-10-01 13:47:27.392751] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:17.283 [2024-10-01 13:47:27.392767] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:17.283 [2024-10-01 13:47:27.393037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:17.283 [2024-10-01 13:47:27.393190] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:17.283 [2024-10-01 13:47:27.393200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:17.283 [2024-10-01 13:47:27.393351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.283 "name": "raid_bdev1", 00:13:17.283 "uuid": "e75f2cfb-fb3c-4613-b6cd-a08fa049cc6d", 00:13:17.283 "strip_size_kb": 64, 00:13:17.283 "state": "online", 00:13:17.283 "raid_level": "raid0", 00:13:17.283 "superblock": true, 00:13:17.283 "num_base_bdevs": 4, 00:13:17.283 "num_base_bdevs_discovered": 4, 00:13:17.283 
"num_base_bdevs_operational": 4, 00:13:17.283 "base_bdevs_list": [ 00:13:17.283 { 00:13:17.283 "name": "BaseBdev1", 00:13:17.283 "uuid": "6c00d9a7-b665-59eb-8bd5-5bc14e8aca47", 00:13:17.283 "is_configured": true, 00:13:17.283 "data_offset": 2048, 00:13:17.283 "data_size": 63488 00:13:17.283 }, 00:13:17.283 { 00:13:17.283 "name": "BaseBdev2", 00:13:17.283 "uuid": "141bd34f-6733-5a53-b0f1-8e9cf7ee9edc", 00:13:17.283 "is_configured": true, 00:13:17.283 "data_offset": 2048, 00:13:17.283 "data_size": 63488 00:13:17.283 }, 00:13:17.283 { 00:13:17.283 "name": "BaseBdev3", 00:13:17.283 "uuid": "dad407b8-1a82-5acc-adf3-bbfb27b4c56a", 00:13:17.283 "is_configured": true, 00:13:17.283 "data_offset": 2048, 00:13:17.283 "data_size": 63488 00:13:17.283 }, 00:13:17.283 { 00:13:17.283 "name": "BaseBdev4", 00:13:17.283 "uuid": "e0cf0be0-439f-5e28-9564-454bfbb9f6e8", 00:13:17.283 "is_configured": true, 00:13:17.283 "data_offset": 2048, 00:13:17.283 "data_size": 63488 00:13:17.283 } 00:13:17.283 ] 00:13:17.283 }' 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.283 13:47:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.893 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:17.893 13:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.893 [2024-10-01 13:47:27.902768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.831 "name": "raid_bdev1", 00:13:18.831 "uuid": "e75f2cfb-fb3c-4613-b6cd-a08fa049cc6d", 00:13:18.831 "strip_size_kb": 64, 00:13:18.831 "state": "online", 00:13:18.831 "raid_level": "raid0", 00:13:18.831 "superblock": true, 00:13:18.831 "num_base_bdevs": 4, 00:13:18.831 "num_base_bdevs_discovered": 4, 00:13:18.831 "num_base_bdevs_operational": 4, 00:13:18.831 "base_bdevs_list": [ 00:13:18.831 { 00:13:18.831 "name": "BaseBdev1", 00:13:18.831 "uuid": "6c00d9a7-b665-59eb-8bd5-5bc14e8aca47", 00:13:18.831 "is_configured": true, 00:13:18.831 "data_offset": 2048, 00:13:18.831 "data_size": 63488 00:13:18.831 }, 00:13:18.831 { 00:13:18.831 "name": "BaseBdev2", 00:13:18.831 "uuid": "141bd34f-6733-5a53-b0f1-8e9cf7ee9edc", 00:13:18.831 "is_configured": true, 00:13:18.831 "data_offset": 2048, 00:13:18.831 "data_size": 63488 00:13:18.831 }, 00:13:18.831 { 00:13:18.831 "name": "BaseBdev3", 00:13:18.831 "uuid": "dad407b8-1a82-5acc-adf3-bbfb27b4c56a", 00:13:18.831 "is_configured": true, 00:13:18.831 "data_offset": 2048, 00:13:18.831 "data_size": 63488 00:13:18.831 }, 00:13:18.831 { 00:13:18.831 "name": "BaseBdev4", 00:13:18.831 "uuid": "e0cf0be0-439f-5e28-9564-454bfbb9f6e8", 00:13:18.831 "is_configured": true, 00:13:18.831 "data_offset": 2048, 00:13:18.831 "data_size": 63488 00:13:18.831 } 00:13:18.831 ] 00:13:18.831 }' 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.831 13:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.089 13:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.089 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.089 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:19.089 [2024-10-01 13:47:29.278636] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.089 [2024-10-01 13:47:29.278673] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.348 [2024-10-01 13:47:29.281375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.348 [2024-10-01 13:47:29.281559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.348 [2024-10-01 13:47:29.281642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.348 [2024-10-01 13:47:29.281749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.348 { 00:13:19.348 "results": [ 00:13:19.348 { 00:13:19.348 "job": "raid_bdev1", 00:13:19.348 "core_mask": "0x1", 00:13:19.348 "workload": "randrw", 00:13:19.348 "percentage": 50, 00:13:19.348 "status": "finished", 00:13:19.348 "queue_depth": 1, 00:13:19.348 "io_size": 131072, 00:13:19.348 "runtime": 1.375588, 00:13:19.348 "iops": 15485.741370235855, 00:13:19.348 "mibps": 1935.717671279482, 00:13:19.348 "io_failed": 1, 00:13:19.348 "io_timeout": 0, 00:13:19.348 "avg_latency_us": 89.51312707997648, 00:13:19.348 "min_latency_us": 26.936546184738955, 00:13:19.348 "max_latency_us": 1434.4224899598394 00:13:19.348 } 00:13:19.348 ], 00:13:19.348 "core_count": 1 00:13:19.348 } 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71027 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71027 ']' 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71027 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71027 00:13:19.348 killing process with pid 71027 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71027' 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71027 00:13:19.348 [2024-10-01 13:47:29.333495] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.348 13:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71027 00:13:19.608 [2024-10-01 13:47:29.657442] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.985 13:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rP3IuslUGz 00:13:20.985 13:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:20.985 13:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:20.985 00:13:20.985 real 0m4.981s 00:13:20.985 user 0m5.869s 00:13:20.985 sys 0m0.677s 00:13:20.985 13:47:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.985 ************************************ 00:13:20.985 END TEST raid_write_error_test 00:13:20.985 ************************************ 00:13:20.985 13:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.985 13:47:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:20.985 13:47:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:20.985 13:47:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:20.985 13:47:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.985 13:47:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.985 ************************************ 00:13:20.985 START TEST raid_state_function_test 00:13:20.985 ************************************ 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71171 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.985 Process raid pid: 71171 00:13:20.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71171' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71171 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71171 ']' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.985 13:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.245 [2024-10-01 13:47:31.198257] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:21.245 [2024-10-01 13:47:31.198585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.245 [2024-10-01 13:47:31.373592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.504 [2024-10-01 13:47:31.595777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.764 [2024-10-01 13:47:31.803121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.764 [2024-10-01 13:47:31.803156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 [2024-10-01 13:47:32.036857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.024 [2024-10-01 13:47:32.036910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.024 [2024-10-01 13:47:32.036925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.024 [2024-10-01 13:47:32.036939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.024 [2024-10-01 13:47:32.036946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:22.024 [2024-10-01 13:47:32.036960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.024 [2024-10-01 13:47:32.036968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.024 [2024-10-01 13:47:32.036980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.024 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.024 "name": "Existed_Raid", 00:13:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.024 "strip_size_kb": 64, 00:13:22.024 "state": "configuring", 00:13:22.024 "raid_level": "concat", 00:13:22.024 "superblock": false, 00:13:22.024 "num_base_bdevs": 4, 00:13:22.024 "num_base_bdevs_discovered": 0, 00:13:22.024 "num_base_bdevs_operational": 4, 00:13:22.024 "base_bdevs_list": [ 00:13:22.024 { 00:13:22.025 "name": "BaseBdev1", 00:13:22.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.025 "is_configured": false, 00:13:22.025 "data_offset": 0, 00:13:22.025 "data_size": 0 00:13:22.025 }, 00:13:22.025 { 00:13:22.025 "name": "BaseBdev2", 00:13:22.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.025 "is_configured": false, 00:13:22.025 "data_offset": 0, 00:13:22.025 "data_size": 0 00:13:22.025 }, 00:13:22.025 { 00:13:22.025 "name": "BaseBdev3", 00:13:22.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.025 "is_configured": false, 00:13:22.025 "data_offset": 0, 00:13:22.025 "data_size": 0 00:13:22.025 }, 00:13:22.025 { 00:13:22.025 "name": "BaseBdev4", 00:13:22.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.025 "is_configured": false, 00:13:22.025 "data_offset": 0, 00:13:22.025 "data_size": 0 00:13:22.025 } 00:13:22.025 ] 00:13:22.025 }' 00:13:22.025 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.025 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.284 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 [2024-10-01 13:47:32.432251] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.285 [2024-10-01 13:47:32.432299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 [2024-10-01 13:47:32.440246] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.285 [2024-10-01 13:47:32.440421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.285 [2024-10-01 13:47:32.440443] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.285 [2024-10-01 13:47:32.440458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.285 [2024-10-01 13:47:32.440466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.285 [2024-10-01 13:47:32.440479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.285 [2024-10-01 13:47:32.440487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.285 [2024-10-01 13:47:32.440500] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.285 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.543 [2024-10-01 13:47:32.507148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.544 BaseBdev1 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 [ 00:13:22.544 { 00:13:22.544 "name": "BaseBdev1", 00:13:22.544 "aliases": [ 00:13:22.544 "5798d819-2bbf-4300-b619-a2d34621cb02" 00:13:22.544 ], 00:13:22.544 "product_name": "Malloc disk", 00:13:22.544 "block_size": 512, 00:13:22.544 "num_blocks": 65536, 00:13:22.544 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:22.544 "assigned_rate_limits": { 00:13:22.544 "rw_ios_per_sec": 0, 00:13:22.544 "rw_mbytes_per_sec": 0, 00:13:22.544 "r_mbytes_per_sec": 0, 00:13:22.544 "w_mbytes_per_sec": 0 00:13:22.544 }, 00:13:22.544 "claimed": true, 00:13:22.544 "claim_type": "exclusive_write", 00:13:22.544 "zoned": false, 00:13:22.544 "supported_io_types": { 00:13:22.544 "read": true, 00:13:22.544 "write": true, 00:13:22.544 "unmap": true, 00:13:22.544 "flush": true, 00:13:22.544 "reset": true, 00:13:22.544 "nvme_admin": false, 00:13:22.544 "nvme_io": false, 00:13:22.544 "nvme_io_md": false, 00:13:22.544 "write_zeroes": true, 00:13:22.544 "zcopy": true, 00:13:22.544 "get_zone_info": false, 00:13:22.544 "zone_management": false, 00:13:22.544 "zone_append": false, 00:13:22.544 "compare": false, 00:13:22.544 "compare_and_write": false, 00:13:22.544 "abort": true, 00:13:22.544 "seek_hole": false, 00:13:22.544 "seek_data": false, 00:13:22.544 "copy": true, 00:13:22.544 "nvme_iov_md": false 00:13:22.544 }, 00:13:22.544 "memory_domains": [ 00:13:22.544 { 00:13:22.544 "dma_device_id": "system", 00:13:22.544 "dma_device_type": 1 00:13:22.544 }, 00:13:22.544 { 00:13:22.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.544 "dma_device_type": 2 00:13:22.544 } 00:13:22.544 ], 00:13:22.544 "driver_specific": {} 00:13:22.544 } 00:13:22.544 ] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.544 "name": "Existed_Raid", 
00:13:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.544 "strip_size_kb": 64, 00:13:22.544 "state": "configuring", 00:13:22.544 "raid_level": "concat", 00:13:22.544 "superblock": false, 00:13:22.544 "num_base_bdevs": 4, 00:13:22.544 "num_base_bdevs_discovered": 1, 00:13:22.544 "num_base_bdevs_operational": 4, 00:13:22.544 "base_bdevs_list": [ 00:13:22.544 { 00:13:22.544 "name": "BaseBdev1", 00:13:22.544 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:22.544 "is_configured": true, 00:13:22.544 "data_offset": 0, 00:13:22.544 "data_size": 65536 00:13:22.544 }, 00:13:22.544 { 00:13:22.544 "name": "BaseBdev2", 00:13:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.544 "is_configured": false, 00:13:22.544 "data_offset": 0, 00:13:22.544 "data_size": 0 00:13:22.544 }, 00:13:22.544 { 00:13:22.544 "name": "BaseBdev3", 00:13:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.544 "is_configured": false, 00:13:22.544 "data_offset": 0, 00:13:22.544 "data_size": 0 00:13:22.544 }, 00:13:22.544 { 00:13:22.544 "name": "BaseBdev4", 00:13:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.544 "is_configured": false, 00:13:22.544 "data_offset": 0, 00:13:22.544 "data_size": 0 00:13:22.544 } 00:13:22.544 ] 00:13:22.544 }' 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.544 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.803 [2024-10-01 13:47:32.942595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.803 [2024-10-01 13:47:32.942782] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.803 [2024-10-01 13:47:32.954617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.803 [2024-10-01 13:47:32.956819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.803 [2024-10-01 13:47:32.956871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.803 [2024-10-01 13:47:32.956882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.803 [2024-10-01 13:47:32.956898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.803 [2024-10-01 13:47:32.956906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.803 [2024-10-01 13:47:32.956919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.803 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.062 13:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.062 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.062 "name": "Existed_Raid", 00:13:23.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.062 "strip_size_kb": 64, 00:13:23.062 "state": "configuring", 00:13:23.062 "raid_level": "concat", 00:13:23.062 "superblock": false, 00:13:23.062 "num_base_bdevs": 4, 00:13:23.062 
"num_base_bdevs_discovered": 1, 00:13:23.062 "num_base_bdevs_operational": 4, 00:13:23.062 "base_bdevs_list": [ 00:13:23.062 { 00:13:23.062 "name": "BaseBdev1", 00:13:23.062 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:23.062 "is_configured": true, 00:13:23.062 "data_offset": 0, 00:13:23.062 "data_size": 65536 00:13:23.062 }, 00:13:23.062 { 00:13:23.062 "name": "BaseBdev2", 00:13:23.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.062 "is_configured": false, 00:13:23.062 "data_offset": 0, 00:13:23.062 "data_size": 0 00:13:23.062 }, 00:13:23.062 { 00:13:23.062 "name": "BaseBdev3", 00:13:23.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.062 "is_configured": false, 00:13:23.062 "data_offset": 0, 00:13:23.062 "data_size": 0 00:13:23.062 }, 00:13:23.062 { 00:13:23.062 "name": "BaseBdev4", 00:13:23.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.062 "is_configured": false, 00:13:23.062 "data_offset": 0, 00:13:23.062 "data_size": 0 00:13:23.062 } 00:13:23.062 ] 00:13:23.062 }' 00:13:23.062 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.062 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.321 [2024-10-01 13:47:33.422230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.321 BaseBdev2 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.321 13:47:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.321 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.322 [ 00:13:23.322 { 00:13:23.322 "name": "BaseBdev2", 00:13:23.322 "aliases": [ 00:13:23.322 "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df" 00:13:23.322 ], 00:13:23.322 "product_name": "Malloc disk", 00:13:23.322 "block_size": 512, 00:13:23.322 "num_blocks": 65536, 00:13:23.322 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:23.322 "assigned_rate_limits": { 00:13:23.322 "rw_ios_per_sec": 0, 00:13:23.322 "rw_mbytes_per_sec": 0, 00:13:23.322 "r_mbytes_per_sec": 0, 00:13:23.322 "w_mbytes_per_sec": 0 00:13:23.322 }, 00:13:23.322 "claimed": true, 00:13:23.322 "claim_type": "exclusive_write", 00:13:23.322 "zoned": false, 00:13:23.322 "supported_io_types": { 
00:13:23.322 "read": true, 00:13:23.322 "write": true, 00:13:23.322 "unmap": true, 00:13:23.322 "flush": true, 00:13:23.322 "reset": true, 00:13:23.322 "nvme_admin": false, 00:13:23.322 "nvme_io": false, 00:13:23.322 "nvme_io_md": false, 00:13:23.322 "write_zeroes": true, 00:13:23.322 "zcopy": true, 00:13:23.322 "get_zone_info": false, 00:13:23.322 "zone_management": false, 00:13:23.322 "zone_append": false, 00:13:23.322 "compare": false, 00:13:23.322 "compare_and_write": false, 00:13:23.322 "abort": true, 00:13:23.322 "seek_hole": false, 00:13:23.322 "seek_data": false, 00:13:23.322 "copy": true, 00:13:23.322 "nvme_iov_md": false 00:13:23.322 }, 00:13:23.322 "memory_domains": [ 00:13:23.322 { 00:13:23.322 "dma_device_id": "system", 00:13:23.322 "dma_device_type": 1 00:13:23.322 }, 00:13:23.322 { 00:13:23.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.322 "dma_device_type": 2 00:13:23.322 } 00:13:23.322 ], 00:13:23.322 "driver_specific": {} 00:13:23.322 } 00:13:23.322 ] 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.322 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.582 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.582 "name": "Existed_Raid", 00:13:23.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.582 "strip_size_kb": 64, 00:13:23.582 "state": "configuring", 00:13:23.582 "raid_level": "concat", 00:13:23.582 "superblock": false, 00:13:23.582 "num_base_bdevs": 4, 00:13:23.582 "num_base_bdevs_discovered": 2, 00:13:23.582 "num_base_bdevs_operational": 4, 00:13:23.582 "base_bdevs_list": [ 00:13:23.582 { 00:13:23.582 "name": "BaseBdev1", 00:13:23.582 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:23.582 "is_configured": true, 00:13:23.582 "data_offset": 0, 00:13:23.582 "data_size": 65536 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "name": "BaseBdev2", 00:13:23.582 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:23.582 
"is_configured": true, 00:13:23.582 "data_offset": 0, 00:13:23.582 "data_size": 65536 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "name": "BaseBdev3", 00:13:23.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.582 "is_configured": false, 00:13:23.582 "data_offset": 0, 00:13:23.582 "data_size": 0 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "name": "BaseBdev4", 00:13:23.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.582 "is_configured": false, 00:13:23.582 "data_offset": 0, 00:13:23.582 "data_size": 0 00:13:23.582 } 00:13:23.582 ] 00:13:23.582 }' 00:13:23.582 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.582 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.841 [2024-10-01 13:47:33.938519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.841 BaseBdev3 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.841 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 [ 00:13:23.842 { 00:13:23.842 "name": "BaseBdev3", 00:13:23.842 "aliases": [ 00:13:23.842 "520c98ef-34c1-45de-a9a8-56fbb96e6360" 00:13:23.842 ], 00:13:23.842 "product_name": "Malloc disk", 00:13:23.842 "block_size": 512, 00:13:23.842 "num_blocks": 65536, 00:13:23.842 "uuid": "520c98ef-34c1-45de-a9a8-56fbb96e6360", 00:13:23.842 "assigned_rate_limits": { 00:13:23.842 "rw_ios_per_sec": 0, 00:13:23.842 "rw_mbytes_per_sec": 0, 00:13:23.842 "r_mbytes_per_sec": 0, 00:13:23.842 "w_mbytes_per_sec": 0 00:13:23.842 }, 00:13:23.842 "claimed": true, 00:13:23.842 "claim_type": "exclusive_write", 00:13:23.842 "zoned": false, 00:13:23.842 "supported_io_types": { 00:13:23.842 "read": true, 00:13:23.842 "write": true, 00:13:23.842 "unmap": true, 00:13:23.842 "flush": true, 00:13:23.842 "reset": true, 00:13:23.842 "nvme_admin": false, 00:13:23.842 "nvme_io": false, 00:13:23.842 "nvme_io_md": false, 00:13:23.842 "write_zeroes": true, 00:13:23.842 "zcopy": true, 00:13:23.842 "get_zone_info": false, 00:13:23.842 "zone_management": false, 00:13:23.842 "zone_append": false, 00:13:23.842 "compare": false, 00:13:23.842 "compare_and_write": false, 
00:13:23.842 "abort": true, 00:13:23.842 "seek_hole": false, 00:13:23.842 "seek_data": false, 00:13:23.842 "copy": true, 00:13:23.842 "nvme_iov_md": false 00:13:23.842 }, 00:13:23.842 "memory_domains": [ 00:13:23.842 { 00:13:23.842 "dma_device_id": "system", 00:13:23.842 "dma_device_type": 1 00:13:23.842 }, 00:13:23.842 { 00:13:23.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.842 "dma_device_type": 2 00:13:23.842 } 00:13:23.842 ], 00:13:23.842 "driver_specific": {} 00:13:23.842 } 00:13:23.842 ] 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.842 13:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.842 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.101 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.101 "name": "Existed_Raid", 00:13:24.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.101 "strip_size_kb": 64, 00:13:24.101 "state": "configuring", 00:13:24.101 "raid_level": "concat", 00:13:24.101 "superblock": false, 00:13:24.101 "num_base_bdevs": 4, 00:13:24.101 "num_base_bdevs_discovered": 3, 00:13:24.101 "num_base_bdevs_operational": 4, 00:13:24.101 "base_bdevs_list": [ 00:13:24.101 { 00:13:24.101 "name": "BaseBdev1", 00:13:24.101 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:24.101 "is_configured": true, 00:13:24.101 "data_offset": 0, 00:13:24.101 "data_size": 65536 00:13:24.101 }, 00:13:24.101 { 00:13:24.101 "name": "BaseBdev2", 00:13:24.101 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:24.101 "is_configured": true, 00:13:24.101 "data_offset": 0, 00:13:24.101 "data_size": 65536 00:13:24.101 }, 00:13:24.101 { 00:13:24.101 "name": "BaseBdev3", 00:13:24.101 "uuid": "520c98ef-34c1-45de-a9a8-56fbb96e6360", 00:13:24.101 "is_configured": true, 00:13:24.101 "data_offset": 0, 00:13:24.101 "data_size": 65536 00:13:24.101 }, 00:13:24.101 { 00:13:24.101 "name": "BaseBdev4", 00:13:24.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.101 "is_configured": false, 
00:13:24.101 "data_offset": 0, 00:13:24.101 "data_size": 0 00:13:24.101 } 00:13:24.101 ] 00:13:24.101 }' 00:13:24.102 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.102 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 [2024-10-01 13:47:34.453577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.361 [2024-10-01 13:47:34.453645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:24.361 [2024-10-01 13:47:34.453655] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:24.361 [2024-10-01 13:47:34.453943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:24.361 [2024-10-01 13:47:34.454106] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:24.361 [2024-10-01 13:47:34.454119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:24.361 [2024-10-01 13:47:34.454385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.361 BaseBdev4 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 [ 00:13:24.361 { 00:13:24.361 "name": "BaseBdev4", 00:13:24.361 "aliases": [ 00:13:24.361 "1530a339-2a26-4fdb-8b36-f4fa5ef985f5" 00:13:24.361 ], 00:13:24.361 "product_name": "Malloc disk", 00:13:24.361 "block_size": 512, 00:13:24.361 "num_blocks": 65536, 00:13:24.361 "uuid": "1530a339-2a26-4fdb-8b36-f4fa5ef985f5", 00:13:24.361 "assigned_rate_limits": { 00:13:24.361 "rw_ios_per_sec": 0, 00:13:24.361 "rw_mbytes_per_sec": 0, 00:13:24.361 "r_mbytes_per_sec": 0, 00:13:24.361 "w_mbytes_per_sec": 0 00:13:24.361 }, 00:13:24.361 "claimed": true, 00:13:24.361 "claim_type": "exclusive_write", 00:13:24.361 "zoned": false, 00:13:24.361 "supported_io_types": { 00:13:24.361 "read": true, 00:13:24.361 "write": true, 00:13:24.361 "unmap": true, 00:13:24.361 "flush": true, 00:13:24.361 "reset": true, 00:13:24.361 
"nvme_admin": false, 00:13:24.361 "nvme_io": false, 00:13:24.361 "nvme_io_md": false, 00:13:24.361 "write_zeroes": true, 00:13:24.361 "zcopy": true, 00:13:24.361 "get_zone_info": false, 00:13:24.361 "zone_management": false, 00:13:24.361 "zone_append": false, 00:13:24.361 "compare": false, 00:13:24.361 "compare_and_write": false, 00:13:24.361 "abort": true, 00:13:24.361 "seek_hole": false, 00:13:24.361 "seek_data": false, 00:13:24.361 "copy": true, 00:13:24.361 "nvme_iov_md": false 00:13:24.361 }, 00:13:24.361 "memory_domains": [ 00:13:24.361 { 00:13:24.361 "dma_device_id": "system", 00:13:24.361 "dma_device_type": 1 00:13:24.361 }, 00:13:24.361 { 00:13:24.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.361 "dma_device_type": 2 00:13:24.361 } 00:13:24.361 ], 00:13:24.361 "driver_specific": {} 00:13:24.361 } 00:13:24.361 ] 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.361 
13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.361 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.621 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.621 "name": "Existed_Raid", 00:13:24.621 "uuid": "c1609f03-5686-4f60-84d0-c6270406c694", 00:13:24.621 "strip_size_kb": 64, 00:13:24.621 "state": "online", 00:13:24.621 "raid_level": "concat", 00:13:24.621 "superblock": false, 00:13:24.621 "num_base_bdevs": 4, 00:13:24.621 "num_base_bdevs_discovered": 4, 00:13:24.621 "num_base_bdevs_operational": 4, 00:13:24.621 "base_bdevs_list": [ 00:13:24.621 { 00:13:24.621 "name": "BaseBdev1", 00:13:24.621 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:24.621 "is_configured": true, 00:13:24.621 "data_offset": 0, 00:13:24.621 "data_size": 65536 00:13:24.621 }, 00:13:24.621 { 00:13:24.621 "name": "BaseBdev2", 00:13:24.621 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:24.621 "is_configured": true, 00:13:24.621 "data_offset": 0, 00:13:24.621 "data_size": 65536 00:13:24.621 }, 00:13:24.621 { 00:13:24.621 "name": "BaseBdev3", 
00:13:24.621 "uuid": "520c98ef-34c1-45de-a9a8-56fbb96e6360", 00:13:24.621 "is_configured": true, 00:13:24.621 "data_offset": 0, 00:13:24.621 "data_size": 65536 00:13:24.621 }, 00:13:24.621 { 00:13:24.621 "name": "BaseBdev4", 00:13:24.621 "uuid": "1530a339-2a26-4fdb-8b36-f4fa5ef985f5", 00:13:24.621 "is_configured": true, 00:13:24.621 "data_offset": 0, 00:13:24.621 "data_size": 65536 00:13:24.621 } 00:13:24.621 ] 00:13:24.621 }' 00:13:24.621 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.621 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.881 [2024-10-01 13:47:34.877382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.881 
13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.881 "name": "Existed_Raid", 00:13:24.881 "aliases": [ 00:13:24.881 "c1609f03-5686-4f60-84d0-c6270406c694" 00:13:24.881 ], 00:13:24.881 "product_name": "Raid Volume", 00:13:24.881 "block_size": 512, 00:13:24.881 "num_blocks": 262144, 00:13:24.881 "uuid": "c1609f03-5686-4f60-84d0-c6270406c694", 00:13:24.881 "assigned_rate_limits": { 00:13:24.881 "rw_ios_per_sec": 0, 00:13:24.881 "rw_mbytes_per_sec": 0, 00:13:24.881 "r_mbytes_per_sec": 0, 00:13:24.881 "w_mbytes_per_sec": 0 00:13:24.881 }, 00:13:24.881 "claimed": false, 00:13:24.881 "zoned": false, 00:13:24.881 "supported_io_types": { 00:13:24.881 "read": true, 00:13:24.881 "write": true, 00:13:24.881 "unmap": true, 00:13:24.881 "flush": true, 00:13:24.881 "reset": true, 00:13:24.881 "nvme_admin": false, 00:13:24.881 "nvme_io": false, 00:13:24.881 "nvme_io_md": false, 00:13:24.881 "write_zeroes": true, 00:13:24.881 "zcopy": false, 00:13:24.881 "get_zone_info": false, 00:13:24.881 "zone_management": false, 00:13:24.881 "zone_append": false, 00:13:24.881 "compare": false, 00:13:24.881 "compare_and_write": false, 00:13:24.881 "abort": false, 00:13:24.881 "seek_hole": false, 00:13:24.881 "seek_data": false, 00:13:24.881 "copy": false, 00:13:24.881 "nvme_iov_md": false 00:13:24.881 }, 00:13:24.881 "memory_domains": [ 00:13:24.881 { 00:13:24.881 "dma_device_id": "system", 00:13:24.881 "dma_device_type": 1 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.881 "dma_device_type": 2 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "system", 00:13:24.881 "dma_device_type": 1 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.881 "dma_device_type": 2 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "system", 00:13:24.881 "dma_device_type": 1 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:24.881 "dma_device_type": 2 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "system", 00:13:24.881 "dma_device_type": 1 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.881 "dma_device_type": 2 00:13:24.881 } 00:13:24.881 ], 00:13:24.881 "driver_specific": { 00:13:24.881 "raid": { 00:13:24.881 "uuid": "c1609f03-5686-4f60-84d0-c6270406c694", 00:13:24.881 "strip_size_kb": 64, 00:13:24.881 "state": "online", 00:13:24.881 "raid_level": "concat", 00:13:24.881 "superblock": false, 00:13:24.881 "num_base_bdevs": 4, 00:13:24.881 "num_base_bdevs_discovered": 4, 00:13:24.881 "num_base_bdevs_operational": 4, 00:13:24.881 "base_bdevs_list": [ 00:13:24.881 { 00:13:24.881 "name": "BaseBdev1", 00:13:24.881 "uuid": "5798d819-2bbf-4300-b619-a2d34621cb02", 00:13:24.881 "is_configured": true, 00:13:24.881 "data_offset": 0, 00:13:24.881 "data_size": 65536 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "name": "BaseBdev2", 00:13:24.881 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:24.881 "is_configured": true, 00:13:24.881 "data_offset": 0, 00:13:24.881 "data_size": 65536 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "name": "BaseBdev3", 00:13:24.881 "uuid": "520c98ef-34c1-45de-a9a8-56fbb96e6360", 00:13:24.881 "is_configured": true, 00:13:24.881 "data_offset": 0, 00:13:24.881 "data_size": 65536 00:13:24.881 }, 00:13:24.881 { 00:13:24.881 "name": "BaseBdev4", 00:13:24.881 "uuid": "1530a339-2a26-4fdb-8b36-f4fa5ef985f5", 00:13:24.881 "is_configured": true, 00:13:24.881 "data_offset": 0, 00:13:24.881 "data_size": 65536 00:13:24.881 } 00:13:24.881 ] 00:13:24.881 } 00:13:24.881 } 00:13:24.881 }' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.881 BaseBdev2 
00:13:24.881 BaseBdev3 00:13:24.881 BaseBdev4' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.881 13:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.881 13:47:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.881 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.141 13:47:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.141 [2024-10-01 13:47:35.172751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.141 [2024-10-01 13:47:35.172913] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.141 [2024-10-01 13:47:35.172995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.141 "name": "Existed_Raid", 00:13:25.141 "uuid": "c1609f03-5686-4f60-84d0-c6270406c694", 00:13:25.141 "strip_size_kb": 64, 00:13:25.141 "state": "offline", 00:13:25.141 "raid_level": "concat", 00:13:25.141 "superblock": false, 00:13:25.141 "num_base_bdevs": 4, 00:13:25.141 "num_base_bdevs_discovered": 3, 00:13:25.141 "num_base_bdevs_operational": 3, 00:13:25.141 "base_bdevs_list": [ 00:13:25.141 { 00:13:25.141 "name": null, 00:13:25.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.141 "is_configured": false, 00:13:25.141 "data_offset": 0, 00:13:25.141 "data_size": 65536 00:13:25.141 }, 00:13:25.141 { 00:13:25.141 "name": "BaseBdev2", 00:13:25.141 "uuid": "45c3cf75-e7a4-4657-abf6-3b8bdf6ed6df", 00:13:25.141 "is_configured": 
true, 00:13:25.141 "data_offset": 0, 00:13:25.141 "data_size": 65536 00:13:25.141 }, 00:13:25.141 { 00:13:25.141 "name": "BaseBdev3", 00:13:25.141 "uuid": "520c98ef-34c1-45de-a9a8-56fbb96e6360", 00:13:25.141 "is_configured": true, 00:13:25.141 "data_offset": 0, 00:13:25.141 "data_size": 65536 00:13:25.141 }, 00:13:25.141 { 00:13:25.141 "name": "BaseBdev4", 00:13:25.141 "uuid": "1530a339-2a26-4fdb-8b36-f4fa5ef985f5", 00:13:25.141 "is_configured": true, 00:13:25.141 "data_offset": 0, 00:13:25.141 "data_size": 65536 00:13:25.141 } 00:13:25.141 ] 00:13:25.141 }' 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.141 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.709 [2024-10-01 13:47:35.771645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.709 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.968 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.969 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.969 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.969 13:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.969 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.969 13:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.969 [2024-10-01 13:47:35.920552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.969 13:47:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.969 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.969 [2024-10-01 13:47:36.073058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.969 [2024-10-01 13:47:36.073111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.228 BaseBdev2 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.228 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.228 [ 00:13:26.228 { 00:13:26.228 "name": "BaseBdev2", 00:13:26.228 "aliases": [ 00:13:26.228 "e14a2234-ad7b-44f9-87c1-6328629b9863" 00:13:26.228 ], 00:13:26.228 "product_name": "Malloc disk", 00:13:26.228 "block_size": 512, 00:13:26.228 "num_blocks": 65536, 00:13:26.228 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:26.228 "assigned_rate_limits": { 00:13:26.228 "rw_ios_per_sec": 0, 00:13:26.228 "rw_mbytes_per_sec": 0, 00:13:26.228 "r_mbytes_per_sec": 0, 00:13:26.228 "w_mbytes_per_sec": 0 00:13:26.228 }, 00:13:26.228 "claimed": false, 00:13:26.228 "zoned": false, 00:13:26.228 "supported_io_types": { 00:13:26.228 "read": true, 00:13:26.228 "write": true, 00:13:26.228 "unmap": true, 00:13:26.228 "flush": true, 00:13:26.228 "reset": true, 00:13:26.228 "nvme_admin": false, 00:13:26.228 "nvme_io": false, 00:13:26.228 "nvme_io_md": false, 00:13:26.228 "write_zeroes": true, 00:13:26.228 "zcopy": true, 00:13:26.228 "get_zone_info": false, 00:13:26.228 "zone_management": false, 00:13:26.228 "zone_append": false, 00:13:26.228 "compare": false, 00:13:26.228 "compare_and_write": false, 00:13:26.228 "abort": true, 00:13:26.228 "seek_hole": false, 00:13:26.228 
"seek_data": false, 00:13:26.228 "copy": true, 00:13:26.228 "nvme_iov_md": false 00:13:26.228 }, 00:13:26.228 "memory_domains": [ 00:13:26.228 { 00:13:26.228 "dma_device_id": "system", 00:13:26.228 "dma_device_type": 1 00:13:26.228 }, 00:13:26.228 { 00:13:26.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.228 "dma_device_type": 2 00:13:26.228 } 00:13:26.228 ], 00:13:26.228 "driver_specific": {} 00:13:26.229 } 00:13:26.229 ] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.229 BaseBdev3 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.229 [ 00:13:26.229 { 00:13:26.229 "name": "BaseBdev3", 00:13:26.229 "aliases": [ 00:13:26.229 "eaaef75c-7d8b-4867-884f-4e6922a9274f" 00:13:26.229 ], 00:13:26.229 "product_name": "Malloc disk", 00:13:26.229 "block_size": 512, 00:13:26.229 "num_blocks": 65536, 00:13:26.229 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:26.229 "assigned_rate_limits": { 00:13:26.229 "rw_ios_per_sec": 0, 00:13:26.229 "rw_mbytes_per_sec": 0, 00:13:26.229 "r_mbytes_per_sec": 0, 00:13:26.229 "w_mbytes_per_sec": 0 00:13:26.229 }, 00:13:26.229 "claimed": false, 00:13:26.229 "zoned": false, 00:13:26.229 "supported_io_types": { 00:13:26.229 "read": true, 00:13:26.229 "write": true, 00:13:26.229 "unmap": true, 00:13:26.229 "flush": true, 00:13:26.229 "reset": true, 00:13:26.229 "nvme_admin": false, 00:13:26.229 "nvme_io": false, 00:13:26.229 "nvme_io_md": false, 00:13:26.229 "write_zeroes": true, 00:13:26.229 "zcopy": true, 00:13:26.229 "get_zone_info": false, 00:13:26.229 "zone_management": false, 00:13:26.229 "zone_append": false, 00:13:26.229 "compare": false, 00:13:26.229 "compare_and_write": false, 00:13:26.229 "abort": true, 00:13:26.229 "seek_hole": false, 00:13:26.229 "seek_data": false, 
00:13:26.229 "copy": true, 00:13:26.229 "nvme_iov_md": false 00:13:26.229 }, 00:13:26.229 "memory_domains": [ 00:13:26.229 { 00:13:26.229 "dma_device_id": "system", 00:13:26.229 "dma_device_type": 1 00:13:26.229 }, 00:13:26.229 { 00:13:26.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.229 "dma_device_type": 2 00:13:26.229 } 00:13:26.229 ], 00:13:26.229 "driver_specific": {} 00:13:26.229 } 00:13:26.229 ] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.229 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.488 BaseBdev4 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:26.488 
13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.488 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.488 [ 00:13:26.488 { 00:13:26.488 "name": "BaseBdev4", 00:13:26.488 "aliases": [ 00:13:26.488 "788d4f49-99a6-457d-aa65-336f68f6fad5" 00:13:26.488 ], 00:13:26.488 "product_name": "Malloc disk", 00:13:26.488 "block_size": 512, 00:13:26.488 "num_blocks": 65536, 00:13:26.488 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:26.489 "assigned_rate_limits": { 00:13:26.489 "rw_ios_per_sec": 0, 00:13:26.489 "rw_mbytes_per_sec": 0, 00:13:26.489 "r_mbytes_per_sec": 0, 00:13:26.489 "w_mbytes_per_sec": 0 00:13:26.489 }, 00:13:26.489 "claimed": false, 00:13:26.489 "zoned": false, 00:13:26.489 "supported_io_types": { 00:13:26.489 "read": true, 00:13:26.489 "write": true, 00:13:26.489 "unmap": true, 00:13:26.489 "flush": true, 00:13:26.489 "reset": true, 00:13:26.489 "nvme_admin": false, 00:13:26.489 "nvme_io": false, 00:13:26.489 "nvme_io_md": false, 00:13:26.489 "write_zeroes": true, 00:13:26.489 "zcopy": true, 00:13:26.489 "get_zone_info": false, 00:13:26.489 "zone_management": false, 00:13:26.489 "zone_append": false, 00:13:26.489 "compare": false, 00:13:26.489 "compare_and_write": false, 00:13:26.489 "abort": true, 00:13:26.489 "seek_hole": false, 00:13:26.489 "seek_data": false, 00:13:26.489 
"copy": true, 00:13:26.489 "nvme_iov_md": false 00:13:26.489 }, 00:13:26.489 "memory_domains": [ 00:13:26.489 { 00:13:26.489 "dma_device_id": "system", 00:13:26.489 "dma_device_type": 1 00:13:26.489 }, 00:13:26.489 { 00:13:26.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.489 "dma_device_type": 2 00:13:26.489 } 00:13:26.489 ], 00:13:26.489 "driver_specific": {} 00:13:26.489 } 00:13:26.489 ] 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.489 [2024-10-01 13:47:36.490411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.489 [2024-10-01 13:47:36.490572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.489 [2024-10-01 13:47:36.490672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.489 [2024-10-01 13:47:36.492807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.489 [2024-10-01 13:47:36.492970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.489 13:47:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.489 "name": "Existed_Raid", 00:13:26.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.489 "strip_size_kb": 64, 00:13:26.489 "state": "configuring", 00:13:26.489 
"raid_level": "concat", 00:13:26.489 "superblock": false, 00:13:26.489 "num_base_bdevs": 4, 00:13:26.489 "num_base_bdevs_discovered": 3, 00:13:26.489 "num_base_bdevs_operational": 4, 00:13:26.489 "base_bdevs_list": [ 00:13:26.489 { 00:13:26.489 "name": "BaseBdev1", 00:13:26.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.489 "is_configured": false, 00:13:26.489 "data_offset": 0, 00:13:26.489 "data_size": 0 00:13:26.489 }, 00:13:26.489 { 00:13:26.489 "name": "BaseBdev2", 00:13:26.489 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:26.489 "is_configured": true, 00:13:26.489 "data_offset": 0, 00:13:26.489 "data_size": 65536 00:13:26.489 }, 00:13:26.489 { 00:13:26.489 "name": "BaseBdev3", 00:13:26.489 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:26.489 "is_configured": true, 00:13:26.489 "data_offset": 0, 00:13:26.489 "data_size": 65536 00:13:26.489 }, 00:13:26.489 { 00:13:26.489 "name": "BaseBdev4", 00:13:26.489 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:26.489 "is_configured": true, 00:13:26.489 "data_offset": 0, 00:13:26.489 "data_size": 65536 00:13:26.489 } 00:13:26.489 ] 00:13:26.489 }' 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.489 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.809 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.809 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.809 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.069 [2024-10-01 13:47:36.973716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.069 13:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.069 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.069 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.069 "name": "Existed_Raid", 00:13:27.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.069 "strip_size_kb": 64, 00:13:27.069 "state": "configuring", 00:13:27.069 "raid_level": "concat", 00:13:27.069 "superblock": false, 
00:13:27.069 "num_base_bdevs": 4, 00:13:27.069 "num_base_bdevs_discovered": 2, 00:13:27.069 "num_base_bdevs_operational": 4, 00:13:27.069 "base_bdevs_list": [ 00:13:27.069 { 00:13:27.069 "name": "BaseBdev1", 00:13:27.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.069 "is_configured": false, 00:13:27.069 "data_offset": 0, 00:13:27.069 "data_size": 0 00:13:27.069 }, 00:13:27.069 { 00:13:27.069 "name": null, 00:13:27.069 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:27.069 "is_configured": false, 00:13:27.069 "data_offset": 0, 00:13:27.069 "data_size": 65536 00:13:27.069 }, 00:13:27.069 { 00:13:27.069 "name": "BaseBdev3", 00:13:27.069 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:27.069 "is_configured": true, 00:13:27.069 "data_offset": 0, 00:13:27.069 "data_size": 65536 00:13:27.069 }, 00:13:27.069 { 00:13:27.069 "name": "BaseBdev4", 00:13:27.069 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:27.069 "is_configured": true, 00:13:27.069 "data_offset": 0, 00:13:27.069 "data_size": 65536 00:13:27.069 } 00:13:27.069 ] 00:13:27.069 }' 00:13:27.069 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.069 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.329 13:47:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.329 [2024-10-01 13:47:37.471554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.329 BaseBdev1 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.329 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.330 [ 00:13:27.330 { 00:13:27.330 "name": "BaseBdev1", 00:13:27.330 "aliases": [ 00:13:27.330 "2b8a731a-d7d6-4da7-9780-133da05d42e6" 00:13:27.330 ], 00:13:27.330 "product_name": "Malloc disk", 00:13:27.330 "block_size": 512, 00:13:27.330 "num_blocks": 65536, 00:13:27.330 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:27.330 "assigned_rate_limits": { 00:13:27.330 "rw_ios_per_sec": 0, 00:13:27.330 "rw_mbytes_per_sec": 0, 00:13:27.330 "r_mbytes_per_sec": 0, 00:13:27.330 "w_mbytes_per_sec": 0 00:13:27.330 }, 00:13:27.330 "claimed": true, 00:13:27.330 "claim_type": "exclusive_write", 00:13:27.330 "zoned": false, 00:13:27.330 "supported_io_types": { 00:13:27.330 "read": true, 00:13:27.330 "write": true, 00:13:27.330 "unmap": true, 00:13:27.330 "flush": true, 00:13:27.330 "reset": true, 00:13:27.330 "nvme_admin": false, 00:13:27.330 "nvme_io": false, 00:13:27.330 "nvme_io_md": false, 00:13:27.330 "write_zeroes": true, 00:13:27.330 "zcopy": true, 00:13:27.330 "get_zone_info": false, 00:13:27.330 "zone_management": false, 00:13:27.330 "zone_append": false, 00:13:27.330 "compare": false, 00:13:27.330 "compare_and_write": false, 00:13:27.330 "abort": true, 00:13:27.330 "seek_hole": false, 00:13:27.330 "seek_data": false, 00:13:27.330 "copy": true, 00:13:27.330 "nvme_iov_md": false 00:13:27.330 }, 00:13:27.330 "memory_domains": [ 00:13:27.330 { 00:13:27.330 "dma_device_id": "system", 00:13:27.330 "dma_device_type": 1 00:13:27.330 }, 00:13:27.330 { 00:13:27.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.330 "dma_device_type": 2 00:13:27.330 } 00:13:27.330 ], 00:13:27.330 "driver_specific": {} 00:13:27.330 } 00:13:27.330 ] 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.330 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.589 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.589 "name": "Existed_Raid", 00:13:27.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.589 "strip_size_kb": 64, 00:13:27.589 "state": "configuring", 00:13:27.589 "raid_level": "concat", 00:13:27.589 "superblock": false, 
00:13:27.589 "num_base_bdevs": 4, 00:13:27.589 "num_base_bdevs_discovered": 3, 00:13:27.589 "num_base_bdevs_operational": 4, 00:13:27.589 "base_bdevs_list": [ 00:13:27.589 { 00:13:27.589 "name": "BaseBdev1", 00:13:27.589 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:27.589 "is_configured": true, 00:13:27.589 "data_offset": 0, 00:13:27.589 "data_size": 65536 00:13:27.590 }, 00:13:27.590 { 00:13:27.590 "name": null, 00:13:27.590 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:27.590 "is_configured": false, 00:13:27.590 "data_offset": 0, 00:13:27.590 "data_size": 65536 00:13:27.590 }, 00:13:27.590 { 00:13:27.590 "name": "BaseBdev3", 00:13:27.590 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:27.590 "is_configured": true, 00:13:27.590 "data_offset": 0, 00:13:27.590 "data_size": 65536 00:13:27.590 }, 00:13:27.590 { 00:13:27.590 "name": "BaseBdev4", 00:13:27.590 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:27.590 "is_configured": true, 00:13:27.590 "data_offset": 0, 00:13:27.590 "data_size": 65536 00:13:27.590 } 00:13:27.590 ] 00:13:27.590 }' 00:13:27.590 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.590 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.850 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.850 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 13:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.850 13:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.850 13:47:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 [2024-10-01 13:47:38.019415] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.850 13:47:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.850 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.110 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.110 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.110 "name": "Existed_Raid", 00:13:28.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.110 "strip_size_kb": 64, 00:13:28.110 "state": "configuring", 00:13:28.110 "raid_level": "concat", 00:13:28.110 "superblock": false, 00:13:28.110 "num_base_bdevs": 4, 00:13:28.110 "num_base_bdevs_discovered": 2, 00:13:28.110 "num_base_bdevs_operational": 4, 00:13:28.110 "base_bdevs_list": [ 00:13:28.110 { 00:13:28.110 "name": "BaseBdev1", 00:13:28.110 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:28.110 "is_configured": true, 00:13:28.110 "data_offset": 0, 00:13:28.110 "data_size": 65536 00:13:28.110 }, 00:13:28.110 { 00:13:28.110 "name": null, 00:13:28.110 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:28.110 "is_configured": false, 00:13:28.110 "data_offset": 0, 00:13:28.110 "data_size": 65536 00:13:28.110 }, 00:13:28.110 { 00:13:28.110 "name": null, 00:13:28.110 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:28.110 "is_configured": false, 00:13:28.110 "data_offset": 0, 00:13:28.110 "data_size": 65536 00:13:28.110 }, 00:13:28.110 { 00:13:28.110 "name": "BaseBdev4", 00:13:28.110 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:28.110 "is_configured": true, 00:13:28.110 "data_offset": 0, 00:13:28.110 "data_size": 65536 00:13:28.110 } 00:13:28.110 ] 00:13:28.110 }' 00:13:28.110 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.110 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.370 [2024-10-01 13:47:38.506727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.370 "name": "Existed_Raid", 00:13:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.370 "strip_size_kb": 64, 00:13:28.370 "state": "configuring", 00:13:28.370 "raid_level": "concat", 00:13:28.370 "superblock": false, 00:13:28.370 "num_base_bdevs": 4, 00:13:28.370 "num_base_bdevs_discovered": 3, 00:13:28.370 "num_base_bdevs_operational": 4, 00:13:28.370 "base_bdevs_list": [ 00:13:28.370 { 00:13:28.370 "name": "BaseBdev1", 00:13:28.370 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:28.370 "is_configured": true, 00:13:28.370 "data_offset": 0, 00:13:28.370 "data_size": 65536 00:13:28.370 }, 00:13:28.370 { 00:13:28.370 "name": null, 00:13:28.370 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:28.370 "is_configured": false, 00:13:28.370 "data_offset": 0, 00:13:28.370 "data_size": 65536 00:13:28.370 }, 00:13:28.370 { 00:13:28.370 "name": "BaseBdev3", 00:13:28.370 "uuid": 
"eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:28.370 "is_configured": true, 00:13:28.370 "data_offset": 0, 00:13:28.370 "data_size": 65536 00:13:28.370 }, 00:13:28.370 { 00:13:28.370 "name": "BaseBdev4", 00:13:28.370 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:28.370 "is_configured": true, 00:13:28.370 "data_offset": 0, 00:13:28.370 "data_size": 65536 00:13:28.370 } 00:13:28.370 ] 00:13:28.370 }' 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.370 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.939 13:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 [2024-10-01 13:47:38.954103] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.939 "name": "Existed_Raid", 00:13:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.939 "strip_size_kb": 64, 00:13:28.939 "state": "configuring", 00:13:28.939 "raid_level": "concat", 00:13:28.939 "superblock": false, 00:13:28.939 "num_base_bdevs": 4, 00:13:28.939 
"num_base_bdevs_discovered": 2, 00:13:28.939 "num_base_bdevs_operational": 4, 00:13:28.939 "base_bdevs_list": [ 00:13:28.939 { 00:13:28.939 "name": null, 00:13:28.939 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:28.939 "is_configured": false, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 65536 00:13:28.939 }, 00:13:28.939 { 00:13:28.939 "name": null, 00:13:28.939 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:28.939 "is_configured": false, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 65536 00:13:28.939 }, 00:13:28.939 { 00:13:28.939 "name": "BaseBdev3", 00:13:28.939 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:28.939 "is_configured": true, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 65536 00:13:28.939 }, 00:13:28.939 { 00:13:28.939 "name": "BaseBdev4", 00:13:28.939 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:28.939 "is_configured": true, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 65536 00:13:28.939 } 00:13:28.939 ] 00:13:28.939 }' 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.939 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.507 [2024-10-01 13:47:39.518085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.507 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.508 "name": "Existed_Raid", 00:13:29.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.508 "strip_size_kb": 64, 00:13:29.508 "state": "configuring", 00:13:29.508 "raid_level": "concat", 00:13:29.508 "superblock": false, 00:13:29.508 "num_base_bdevs": 4, 00:13:29.508 "num_base_bdevs_discovered": 3, 00:13:29.508 "num_base_bdevs_operational": 4, 00:13:29.508 "base_bdevs_list": [ 00:13:29.508 { 00:13:29.508 "name": null, 00:13:29.508 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:29.508 "is_configured": false, 00:13:29.508 "data_offset": 0, 00:13:29.508 "data_size": 65536 00:13:29.508 }, 00:13:29.508 { 00:13:29.508 "name": "BaseBdev2", 00:13:29.508 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:29.508 "is_configured": true, 00:13:29.508 "data_offset": 0, 00:13:29.508 "data_size": 65536 00:13:29.508 }, 00:13:29.508 { 00:13:29.508 "name": "BaseBdev3", 00:13:29.508 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:29.508 "is_configured": true, 00:13:29.508 "data_offset": 0, 00:13:29.508 "data_size": 65536 00:13:29.508 }, 00:13:29.508 { 00:13:29.508 "name": "BaseBdev4", 00:13:29.508 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:29.508 "is_configured": true, 00:13:29.508 "data_offset": 0, 00:13:29.508 "data_size": 65536 00:13:29.508 } 00:13:29.508 ] 00:13:29.508 }' 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.508 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.766 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.766 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.766 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:30.026 13:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b8a731a-d7d6-4da7-9780-133da05d42e6 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.026 [2024-10-01 13:47:40.065944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:30.026 [2024-10-01 13:47:40.065995] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:30.026 [2024-10-01 13:47:40.066005] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:30.026 [2024-10-01 13:47:40.066274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:30.026 [2024-10-01 13:47:40.066424] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:30.026 [2024-10-01 13:47:40.066439] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:30.026 [2024-10-01 13:47:40.066723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.026 NewBaseBdev 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.026 [ 00:13:30.026 { 00:13:30.026 "name": "NewBaseBdev", 00:13:30.026 "aliases": [ 00:13:30.026 "2b8a731a-d7d6-4da7-9780-133da05d42e6" 00:13:30.026 ], 00:13:30.026 "product_name": "Malloc disk", 00:13:30.026 "block_size": 512, 00:13:30.026 "num_blocks": 65536, 00:13:30.026 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:30.026 "assigned_rate_limits": { 00:13:30.026 "rw_ios_per_sec": 0, 00:13:30.026 "rw_mbytes_per_sec": 0, 00:13:30.026 "r_mbytes_per_sec": 0, 00:13:30.026 "w_mbytes_per_sec": 0 00:13:30.026 }, 00:13:30.026 "claimed": true, 00:13:30.026 "claim_type": "exclusive_write", 00:13:30.026 "zoned": false, 00:13:30.026 "supported_io_types": { 00:13:30.026 "read": true, 00:13:30.026 "write": true, 00:13:30.026 "unmap": true, 00:13:30.026 "flush": true, 00:13:30.026 "reset": true, 00:13:30.026 "nvme_admin": false, 00:13:30.026 "nvme_io": false, 00:13:30.026 "nvme_io_md": false, 00:13:30.026 "write_zeroes": true, 00:13:30.026 "zcopy": true, 00:13:30.026 "get_zone_info": false, 00:13:30.026 "zone_management": false, 00:13:30.026 "zone_append": false, 00:13:30.026 "compare": false, 00:13:30.026 "compare_and_write": false, 00:13:30.026 "abort": true, 00:13:30.026 "seek_hole": false, 00:13:30.026 "seek_data": false, 00:13:30.026 "copy": true, 00:13:30.026 "nvme_iov_md": false 00:13:30.026 }, 00:13:30.026 "memory_domains": [ 00:13:30.026 { 00:13:30.026 "dma_device_id": "system", 00:13:30.026 "dma_device_type": 1 00:13:30.026 }, 00:13:30.026 { 00:13:30.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.026 "dma_device_type": 2 00:13:30.026 } 00:13:30.026 ], 00:13:30.026 "driver_specific": {} 00:13:30.026 } 00:13:30.026 ] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:30.026 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.027 "name": "Existed_Raid", 00:13:30.027 "uuid": "1ac386b7-16ee-4365-808f-cc9c8abc9576", 00:13:30.027 "strip_size_kb": 64, 00:13:30.027 "state": "online", 00:13:30.027 "raid_level": "concat", 00:13:30.027 "superblock": false, 00:13:30.027 
"num_base_bdevs": 4, 00:13:30.027 "num_base_bdevs_discovered": 4, 00:13:30.027 "num_base_bdevs_operational": 4, 00:13:30.027 "base_bdevs_list": [ 00:13:30.027 { 00:13:30.027 "name": "NewBaseBdev", 00:13:30.027 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:30.027 "is_configured": true, 00:13:30.027 "data_offset": 0, 00:13:30.027 "data_size": 65536 00:13:30.027 }, 00:13:30.027 { 00:13:30.027 "name": "BaseBdev2", 00:13:30.027 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:30.027 "is_configured": true, 00:13:30.027 "data_offset": 0, 00:13:30.027 "data_size": 65536 00:13:30.027 }, 00:13:30.027 { 00:13:30.027 "name": "BaseBdev3", 00:13:30.027 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:30.027 "is_configured": true, 00:13:30.027 "data_offset": 0, 00:13:30.027 "data_size": 65536 00:13:30.027 }, 00:13:30.027 { 00:13:30.027 "name": "BaseBdev4", 00:13:30.027 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:30.027 "is_configured": true, 00:13:30.027 "data_offset": 0, 00:13:30.027 "data_size": 65536 00:13:30.027 } 00:13:30.027 ] 00:13:30.027 }' 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.027 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.597 13:47:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.597 [2024-10-01 13:47:40.505777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.597 "name": "Existed_Raid", 00:13:30.597 "aliases": [ 00:13:30.597 "1ac386b7-16ee-4365-808f-cc9c8abc9576" 00:13:30.597 ], 00:13:30.597 "product_name": "Raid Volume", 00:13:30.597 "block_size": 512, 00:13:30.597 "num_blocks": 262144, 00:13:30.597 "uuid": "1ac386b7-16ee-4365-808f-cc9c8abc9576", 00:13:30.597 "assigned_rate_limits": { 00:13:30.597 "rw_ios_per_sec": 0, 00:13:30.597 "rw_mbytes_per_sec": 0, 00:13:30.597 "r_mbytes_per_sec": 0, 00:13:30.597 "w_mbytes_per_sec": 0 00:13:30.597 }, 00:13:30.597 "claimed": false, 00:13:30.597 "zoned": false, 00:13:30.597 "supported_io_types": { 00:13:30.597 "read": true, 00:13:30.597 "write": true, 00:13:30.597 "unmap": true, 00:13:30.597 "flush": true, 00:13:30.597 "reset": true, 00:13:30.597 "nvme_admin": false, 00:13:30.597 "nvme_io": false, 00:13:30.597 "nvme_io_md": false, 00:13:30.597 "write_zeroes": true, 00:13:30.597 "zcopy": false, 00:13:30.597 "get_zone_info": false, 00:13:30.597 "zone_management": false, 00:13:30.597 "zone_append": false, 00:13:30.597 "compare": false, 00:13:30.597 "compare_and_write": false, 00:13:30.597 "abort": false, 00:13:30.597 "seek_hole": false, 00:13:30.597 "seek_data": false, 00:13:30.597 "copy": false, 00:13:30.597 "nvme_iov_md": false 00:13:30.597 }, 
00:13:30.597 "memory_domains": [ 00:13:30.597 { 00:13:30.597 "dma_device_id": "system", 00:13:30.597 "dma_device_type": 1 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.597 "dma_device_type": 2 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "system", 00:13:30.597 "dma_device_type": 1 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.597 "dma_device_type": 2 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "system", 00:13:30.597 "dma_device_type": 1 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.597 "dma_device_type": 2 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "system", 00:13:30.597 "dma_device_type": 1 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.597 "dma_device_type": 2 00:13:30.597 } 00:13:30.597 ], 00:13:30.597 "driver_specific": { 00:13:30.597 "raid": { 00:13:30.597 "uuid": "1ac386b7-16ee-4365-808f-cc9c8abc9576", 00:13:30.597 "strip_size_kb": 64, 00:13:30.597 "state": "online", 00:13:30.597 "raid_level": "concat", 00:13:30.597 "superblock": false, 00:13:30.597 "num_base_bdevs": 4, 00:13:30.597 "num_base_bdevs_discovered": 4, 00:13:30.597 "num_base_bdevs_operational": 4, 00:13:30.597 "base_bdevs_list": [ 00:13:30.597 { 00:13:30.597 "name": "NewBaseBdev", 00:13:30.597 "uuid": "2b8a731a-d7d6-4da7-9780-133da05d42e6", 00:13:30.597 "is_configured": true, 00:13:30.597 "data_offset": 0, 00:13:30.597 "data_size": 65536 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "name": "BaseBdev2", 00:13:30.597 "uuid": "e14a2234-ad7b-44f9-87c1-6328629b9863", 00:13:30.597 "is_configured": true, 00:13:30.597 "data_offset": 0, 00:13:30.597 "data_size": 65536 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "name": "BaseBdev3", 00:13:30.597 "uuid": "eaaef75c-7d8b-4867-884f-4e6922a9274f", 00:13:30.597 "is_configured": true, 00:13:30.597 "data_offset": 0, 
00:13:30.597 "data_size": 65536 00:13:30.597 }, 00:13:30.597 { 00:13:30.597 "name": "BaseBdev4", 00:13:30.597 "uuid": "788d4f49-99a6-457d-aa65-336f68f6fad5", 00:13:30.597 "is_configured": true, 00:13:30.597 "data_offset": 0, 00:13:30.597 "data_size": 65536 00:13:30.597 } 00:13:30.597 ] 00:13:30.597 } 00:13:30.597 } 00:13:30.597 }' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.597 BaseBdev2 00:13:30.597 BaseBdev3 00:13:30.597 BaseBdev4' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.597 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.598 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.857 [2024-10-01 13:47:40.812967] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.857 [2024-10-01 13:47:40.813007] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.857 [2024-10-01 13:47:40.813091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.857 [2024-10-01 13:47:40.813160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.857 [2024-10-01 13:47:40.813173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71171 00:13:30.857 13:47:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71171 ']' 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71171 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71171 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71171' 00:13:30.857 killing process with pid 71171 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71171 00:13:30.857 [2024-10-01 13:47:40.864439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.857 13:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71171 00:13:31.115 [2024-10-01 13:47:41.271988] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:32.493 ************************************ 00:13:32.493 END TEST raid_state_function_test 00:13:32.493 ************************************ 00:13:32.493 00:13:32.493 real 0m11.469s 00:13:32.493 user 0m17.979s 00:13:32.493 sys 0m2.305s 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.493 13:47:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:32.493 13:47:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:32.493 13:47:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.493 13:47:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.493 ************************************ 00:13:32.493 START TEST raid_state_function_test_sb 00:13:32.493 ************************************ 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:32.493 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71846 00:13:32.494 13:47:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71846' 00:13:32.494 Process raid pid: 71846 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71846 00:13:32.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71846 ']' 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.494 13:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.752 [2024-10-01 13:47:42.746390] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:32.752 [2024-10-01 13:47:42.746736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.752 [2024-10-01 13:47:42.919546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.011 [2024-10-01 13:47:43.142173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.270 [2024-10-01 13:47:43.360644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.270 [2024-10-01 13:47:43.360686] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.529 [2024-10-01 13:47:43.587762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.529 [2024-10-01 13:47:43.587819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.529 [2024-10-01 13:47:43.587834] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.529 [2024-10-01 13:47:43.587848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.529 [2024-10-01 13:47:43.587856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:33.529 [2024-10-01 13:47:43.587868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.529 [2024-10-01 13:47:43.587876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:33.529 [2024-10-01 13:47:43.587890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.529 
13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.529 "name": "Existed_Raid", 00:13:33.529 "uuid": "e435020a-cc58-4a99-bb30-6b245e13192e", 00:13:33.529 "strip_size_kb": 64, 00:13:33.529 "state": "configuring", 00:13:33.529 "raid_level": "concat", 00:13:33.529 "superblock": true, 00:13:33.529 "num_base_bdevs": 4, 00:13:33.529 "num_base_bdevs_discovered": 0, 00:13:33.529 "num_base_bdevs_operational": 4, 00:13:33.529 "base_bdevs_list": [ 00:13:33.529 { 00:13:33.529 "name": "BaseBdev1", 00:13:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.529 "is_configured": false, 00:13:33.529 "data_offset": 0, 00:13:33.529 "data_size": 0 00:13:33.529 }, 00:13:33.529 { 00:13:33.529 "name": "BaseBdev2", 00:13:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.529 "is_configured": false, 00:13:33.529 "data_offset": 0, 00:13:33.529 "data_size": 0 00:13:33.529 }, 00:13:33.529 { 00:13:33.529 "name": "BaseBdev3", 00:13:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.529 "is_configured": false, 00:13:33.529 "data_offset": 0, 00:13:33.529 "data_size": 0 00:13:33.529 }, 00:13:33.529 { 00:13:33.529 "name": "BaseBdev4", 00:13:33.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.529 "is_configured": false, 00:13:33.529 "data_offset": 0, 00:13:33.529 "data_size": 0 00:13:33.529 } 00:13:33.529 ] 00:13:33.529 }' 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.529 13:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 13:47:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 [2024-10-01 13:47:44.011179] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.097 [2024-10-01 13:47:44.011225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 [2024-10-01 13:47:44.019198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.097 [2024-10-01 13:47:44.019245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.097 [2024-10-01 13:47:44.019255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.097 [2024-10-01 13:47:44.019268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.097 [2024-10-01 13:47:44.019275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.097 [2024-10-01 13:47:44.019287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.097 [2024-10-01 13:47:44.019294] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:34.097 [2024-10-01 13:47:44.019306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 [2024-10-01 13:47:44.075779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.097 BaseBdev1 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.097 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.097 [ 00:13:34.097 { 00:13:34.097 "name": "BaseBdev1", 00:13:34.097 "aliases": [ 00:13:34.097 "47371fcf-14d8-465b-8bf3-45b511ae25f1" 00:13:34.098 ], 00:13:34.098 "product_name": "Malloc disk", 00:13:34.098 "block_size": 512, 00:13:34.098 "num_blocks": 65536, 00:13:34.098 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:34.098 "assigned_rate_limits": { 00:13:34.098 "rw_ios_per_sec": 0, 00:13:34.098 "rw_mbytes_per_sec": 0, 00:13:34.098 "r_mbytes_per_sec": 0, 00:13:34.098 "w_mbytes_per_sec": 0 00:13:34.098 }, 00:13:34.098 "claimed": true, 00:13:34.098 "claim_type": "exclusive_write", 00:13:34.098 "zoned": false, 00:13:34.098 "supported_io_types": { 00:13:34.098 "read": true, 00:13:34.098 "write": true, 00:13:34.098 "unmap": true, 00:13:34.098 "flush": true, 00:13:34.098 "reset": true, 00:13:34.098 "nvme_admin": false, 00:13:34.098 "nvme_io": false, 00:13:34.098 "nvme_io_md": false, 00:13:34.098 "write_zeroes": true, 00:13:34.098 "zcopy": true, 00:13:34.098 "get_zone_info": false, 00:13:34.098 "zone_management": false, 00:13:34.098 "zone_append": false, 00:13:34.098 "compare": false, 00:13:34.098 "compare_and_write": false, 00:13:34.098 "abort": true, 00:13:34.098 "seek_hole": false, 00:13:34.098 "seek_data": false, 00:13:34.098 "copy": true, 00:13:34.098 "nvme_iov_md": false 00:13:34.098 }, 00:13:34.098 "memory_domains": [ 00:13:34.098 { 00:13:34.098 "dma_device_id": "system", 00:13:34.098 "dma_device_type": 1 00:13:34.098 }, 00:13:34.098 { 00:13:34.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.098 "dma_device_type": 2 00:13:34.098 } 
00:13:34.098 ], 00:13:34.098 "driver_specific": {} 00:13:34.098 } 00:13:34.098 ] 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.098 13:47:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.098 "name": "Existed_Raid", 00:13:34.098 "uuid": "6ed8b846-48f5-44eb-a566-a03ca1b42405", 00:13:34.098 "strip_size_kb": 64, 00:13:34.098 "state": "configuring", 00:13:34.098 "raid_level": "concat", 00:13:34.098 "superblock": true, 00:13:34.098 "num_base_bdevs": 4, 00:13:34.098 "num_base_bdevs_discovered": 1, 00:13:34.098 "num_base_bdevs_operational": 4, 00:13:34.098 "base_bdevs_list": [ 00:13:34.098 { 00:13:34.098 "name": "BaseBdev1", 00:13:34.098 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:34.098 "is_configured": true, 00:13:34.098 "data_offset": 2048, 00:13:34.098 "data_size": 63488 00:13:34.098 }, 00:13:34.098 { 00:13:34.098 "name": "BaseBdev2", 00:13:34.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.098 "is_configured": false, 00:13:34.098 "data_offset": 0, 00:13:34.098 "data_size": 0 00:13:34.098 }, 00:13:34.098 { 00:13:34.098 "name": "BaseBdev3", 00:13:34.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.098 "is_configured": false, 00:13:34.098 "data_offset": 0, 00:13:34.098 "data_size": 0 00:13:34.098 }, 00:13:34.098 { 00:13:34.098 "name": "BaseBdev4", 00:13:34.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.098 "is_configured": false, 00:13:34.098 "data_offset": 0, 00:13:34.098 "data_size": 0 00:13:34.098 } 00:13:34.098 ] 00:13:34.098 }' 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.098 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.357 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.357 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.357 13:47:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.357 [2024-10-01 13:47:44.547207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.616 [2024-10-01 13:47:44.547418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:34.616 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.616 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.616 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.616 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.616 [2024-10-01 13:47:44.563254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.616 [2024-10-01 13:47:44.565530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.616 [2024-10-01 13:47:44.565672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.616 [2024-10-01 13:47:44.565796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.616 [2024-10-01 13:47:44.565849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.617 [2024-10-01 13:47:44.565930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.617 [2024-10-01 13:47:44.565972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:34.617 "name": "Existed_Raid", 00:13:34.617 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:34.617 "strip_size_kb": 64, 00:13:34.617 "state": "configuring", 00:13:34.617 "raid_level": "concat", 00:13:34.617 "superblock": true, 00:13:34.617 "num_base_bdevs": 4, 00:13:34.617 "num_base_bdevs_discovered": 1, 00:13:34.617 "num_base_bdevs_operational": 4, 00:13:34.617 "base_bdevs_list": [ 00:13:34.617 { 00:13:34.617 "name": "BaseBdev1", 00:13:34.617 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:34.617 "is_configured": true, 00:13:34.617 "data_offset": 2048, 00:13:34.617 "data_size": 63488 00:13:34.617 }, 00:13:34.617 { 00:13:34.617 "name": "BaseBdev2", 00:13:34.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.617 "is_configured": false, 00:13:34.617 "data_offset": 0, 00:13:34.617 "data_size": 0 00:13:34.617 }, 00:13:34.617 { 00:13:34.617 "name": "BaseBdev3", 00:13:34.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.617 "is_configured": false, 00:13:34.617 "data_offset": 0, 00:13:34.617 "data_size": 0 00:13:34.617 }, 00:13:34.617 { 00:13:34.617 "name": "BaseBdev4", 00:13:34.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.617 "is_configured": false, 00:13:34.617 "data_offset": 0, 00:13:34.617 "data_size": 0 00:13:34.617 } 00:13:34.617 ] 00:13:34.617 }' 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.617 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.876 13:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:34.876 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.876 13:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.876 [2024-10-01 13:47:45.001829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:34.876 BaseBdev2 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.876 [ 00:13:34.876 { 00:13:34.876 "name": "BaseBdev2", 00:13:34.876 "aliases": [ 00:13:34.876 "7f682425-1f8e-4318-837a-eac840bfc8d8" 00:13:34.876 ], 00:13:34.876 "product_name": "Malloc disk", 00:13:34.876 "block_size": 512, 00:13:34.876 "num_blocks": 65536, 00:13:34.876 "uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 
00:13:34.876 "assigned_rate_limits": { 00:13:34.876 "rw_ios_per_sec": 0, 00:13:34.876 "rw_mbytes_per_sec": 0, 00:13:34.876 "r_mbytes_per_sec": 0, 00:13:34.876 "w_mbytes_per_sec": 0 00:13:34.876 }, 00:13:34.876 "claimed": true, 00:13:34.876 "claim_type": "exclusive_write", 00:13:34.876 "zoned": false, 00:13:34.876 "supported_io_types": { 00:13:34.876 "read": true, 00:13:34.876 "write": true, 00:13:34.876 "unmap": true, 00:13:34.876 "flush": true, 00:13:34.876 "reset": true, 00:13:34.876 "nvme_admin": false, 00:13:34.876 "nvme_io": false, 00:13:34.876 "nvme_io_md": false, 00:13:34.876 "write_zeroes": true, 00:13:34.876 "zcopy": true, 00:13:34.876 "get_zone_info": false, 00:13:34.876 "zone_management": false, 00:13:34.876 "zone_append": false, 00:13:34.876 "compare": false, 00:13:34.876 "compare_and_write": false, 00:13:34.876 "abort": true, 00:13:34.876 "seek_hole": false, 00:13:34.876 "seek_data": false, 00:13:34.876 "copy": true, 00:13:34.876 "nvme_iov_md": false 00:13:34.876 }, 00:13:34.876 "memory_domains": [ 00:13:34.876 { 00:13:34.876 "dma_device_id": "system", 00:13:34.876 "dma_device_type": 1 00:13:34.876 }, 00:13:34.876 { 00:13:34.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.876 "dma_device_type": 2 00:13:34.876 } 00:13:34.876 ], 00:13:34.876 "driver_specific": {} 00:13:34.876 } 00:13:34.876 ] 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:34.876 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.877 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.136 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.136 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.136 "name": "Existed_Raid", 00:13:35.136 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:35.136 "strip_size_kb": 64, 00:13:35.136 "state": "configuring", 00:13:35.136 "raid_level": "concat", 00:13:35.136 "superblock": true, 00:13:35.136 "num_base_bdevs": 4, 00:13:35.136 "num_base_bdevs_discovered": 2, 00:13:35.136 
"num_base_bdevs_operational": 4, 00:13:35.136 "base_bdevs_list": [ 00:13:35.136 { 00:13:35.136 "name": "BaseBdev1", 00:13:35.136 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:35.136 "is_configured": true, 00:13:35.136 "data_offset": 2048, 00:13:35.136 "data_size": 63488 00:13:35.136 }, 00:13:35.136 { 00:13:35.136 "name": "BaseBdev2", 00:13:35.136 "uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 00:13:35.136 "is_configured": true, 00:13:35.136 "data_offset": 2048, 00:13:35.136 "data_size": 63488 00:13:35.136 }, 00:13:35.136 { 00:13:35.136 "name": "BaseBdev3", 00:13:35.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.136 "is_configured": false, 00:13:35.136 "data_offset": 0, 00:13:35.136 "data_size": 0 00:13:35.136 }, 00:13:35.136 { 00:13:35.136 "name": "BaseBdev4", 00:13:35.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.136 "is_configured": false, 00:13:35.136 "data_offset": 0, 00:13:35.136 "data_size": 0 00:13:35.136 } 00:13:35.136 ] 00:13:35.136 }' 00:13:35.136 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.136 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.426 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.426 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.426 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.427 [2024-10-01 13:47:45.553221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.427 BaseBdev3 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.427 [ 00:13:35.427 { 00:13:35.427 "name": "BaseBdev3", 00:13:35.427 "aliases": [ 00:13:35.427 "b9b0e1d1-8dc6-478e-9333-37a55226b572" 00:13:35.427 ], 00:13:35.427 "product_name": "Malloc disk", 00:13:35.427 "block_size": 512, 00:13:35.427 "num_blocks": 65536, 00:13:35.427 "uuid": "b9b0e1d1-8dc6-478e-9333-37a55226b572", 00:13:35.427 "assigned_rate_limits": { 00:13:35.427 "rw_ios_per_sec": 0, 00:13:35.427 "rw_mbytes_per_sec": 0, 00:13:35.427 "r_mbytes_per_sec": 0, 00:13:35.427 "w_mbytes_per_sec": 0 00:13:35.427 }, 00:13:35.427 "claimed": true, 00:13:35.427 "claim_type": "exclusive_write", 00:13:35.427 "zoned": false, 00:13:35.427 "supported_io_types": { 
00:13:35.427 "read": true, 00:13:35.427 "write": true, 00:13:35.427 "unmap": true, 00:13:35.427 "flush": true, 00:13:35.427 "reset": true, 00:13:35.427 "nvme_admin": false, 00:13:35.427 "nvme_io": false, 00:13:35.427 "nvme_io_md": false, 00:13:35.427 "write_zeroes": true, 00:13:35.427 "zcopy": true, 00:13:35.427 "get_zone_info": false, 00:13:35.427 "zone_management": false, 00:13:35.427 "zone_append": false, 00:13:35.427 "compare": false, 00:13:35.427 "compare_and_write": false, 00:13:35.427 "abort": true, 00:13:35.427 "seek_hole": false, 00:13:35.427 "seek_data": false, 00:13:35.427 "copy": true, 00:13:35.427 "nvme_iov_md": false 00:13:35.427 }, 00:13:35.427 "memory_domains": [ 00:13:35.427 { 00:13:35.427 "dma_device_id": "system", 00:13:35.427 "dma_device_type": 1 00:13:35.427 }, 00:13:35.427 { 00:13:35.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.427 "dma_device_type": 2 00:13:35.427 } 00:13:35.427 ], 00:13:35.427 "driver_specific": {} 00:13:35.427 } 00:13:35.427 ] 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.427 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.685 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.685 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.685 "name": "Existed_Raid", 00:13:35.685 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:35.685 "strip_size_kb": 64, 00:13:35.685 "state": "configuring", 00:13:35.685 "raid_level": "concat", 00:13:35.685 "superblock": true, 00:13:35.685 "num_base_bdevs": 4, 00:13:35.685 "num_base_bdevs_discovered": 3, 00:13:35.685 "num_base_bdevs_operational": 4, 00:13:35.685 "base_bdevs_list": [ 00:13:35.685 { 00:13:35.685 "name": "BaseBdev1", 00:13:35.685 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:35.685 "is_configured": true, 00:13:35.685 "data_offset": 2048, 00:13:35.685 "data_size": 63488 00:13:35.685 }, 00:13:35.685 { 00:13:35.685 "name": "BaseBdev2", 00:13:35.685 
"uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 00:13:35.685 "is_configured": true, 00:13:35.685 "data_offset": 2048, 00:13:35.685 "data_size": 63488 00:13:35.685 }, 00:13:35.685 { 00:13:35.685 "name": "BaseBdev3", 00:13:35.685 "uuid": "b9b0e1d1-8dc6-478e-9333-37a55226b572", 00:13:35.685 "is_configured": true, 00:13:35.685 "data_offset": 2048, 00:13:35.685 "data_size": 63488 00:13:35.685 }, 00:13:35.685 { 00:13:35.685 "name": "BaseBdev4", 00:13:35.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.685 "is_configured": false, 00:13:35.685 "data_offset": 0, 00:13:35.685 "data_size": 0 00:13:35.685 } 00:13:35.685 ] 00:13:35.685 }' 00:13:35.685 13:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.685 13:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.946 [2024-10-01 13:47:46.053938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.946 [2024-10-01 13:47:46.054232] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.946 [2024-10-01 13:47:46.054249] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:35.946 [2024-10-01 13:47:46.054561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.946 [2024-10-01 13:47:46.054732] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.946 [2024-10-01 13:47:46.054749] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:13:35.946 BaseBdev4 00:13:35.946 [2024-10-01 13:47:46.054898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.946 [ 00:13:35.946 { 00:13:35.946 "name": "BaseBdev4", 00:13:35.946 "aliases": [ 00:13:35.946 "71a64d0f-645e-4763-85c5-f85fb21bf201" 00:13:35.946 ], 00:13:35.946 "product_name": "Malloc disk", 00:13:35.946 "block_size": 512, 
00:13:35.946 "num_blocks": 65536, 00:13:35.946 "uuid": "71a64d0f-645e-4763-85c5-f85fb21bf201", 00:13:35.946 "assigned_rate_limits": { 00:13:35.946 "rw_ios_per_sec": 0, 00:13:35.946 "rw_mbytes_per_sec": 0, 00:13:35.946 "r_mbytes_per_sec": 0, 00:13:35.946 "w_mbytes_per_sec": 0 00:13:35.946 }, 00:13:35.946 "claimed": true, 00:13:35.946 "claim_type": "exclusive_write", 00:13:35.946 "zoned": false, 00:13:35.946 "supported_io_types": { 00:13:35.946 "read": true, 00:13:35.946 "write": true, 00:13:35.946 "unmap": true, 00:13:35.946 "flush": true, 00:13:35.946 "reset": true, 00:13:35.946 "nvme_admin": false, 00:13:35.946 "nvme_io": false, 00:13:35.946 "nvme_io_md": false, 00:13:35.946 "write_zeroes": true, 00:13:35.946 "zcopy": true, 00:13:35.946 "get_zone_info": false, 00:13:35.946 "zone_management": false, 00:13:35.946 "zone_append": false, 00:13:35.946 "compare": false, 00:13:35.946 "compare_and_write": false, 00:13:35.946 "abort": true, 00:13:35.946 "seek_hole": false, 00:13:35.946 "seek_data": false, 00:13:35.946 "copy": true, 00:13:35.946 "nvme_iov_md": false 00:13:35.946 }, 00:13:35.946 "memory_domains": [ 00:13:35.946 { 00:13:35.946 "dma_device_id": "system", 00:13:35.946 "dma_device_type": 1 00:13:35.946 }, 00:13:35.946 { 00:13:35.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.946 "dma_device_type": 2 00:13:35.946 } 00:13:35.946 ], 00:13:35.946 "driver_specific": {} 00:13:35.946 } 00:13:35.946 ] 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.946 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.205 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.205 "name": "Existed_Raid", 00:13:36.205 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:36.205 "strip_size_kb": 64, 00:13:36.205 "state": "online", 00:13:36.205 "raid_level": "concat", 00:13:36.205 "superblock": true, 00:13:36.205 "num_base_bdevs": 
4, 00:13:36.205 "num_base_bdevs_discovered": 4, 00:13:36.205 "num_base_bdevs_operational": 4, 00:13:36.205 "base_bdevs_list": [ 00:13:36.205 { 00:13:36.205 "name": "BaseBdev1", 00:13:36.205 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:36.205 "is_configured": true, 00:13:36.205 "data_offset": 2048, 00:13:36.205 "data_size": 63488 00:13:36.205 }, 00:13:36.205 { 00:13:36.205 "name": "BaseBdev2", 00:13:36.205 "uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 00:13:36.205 "is_configured": true, 00:13:36.205 "data_offset": 2048, 00:13:36.205 "data_size": 63488 00:13:36.205 }, 00:13:36.205 { 00:13:36.205 "name": "BaseBdev3", 00:13:36.205 "uuid": "b9b0e1d1-8dc6-478e-9333-37a55226b572", 00:13:36.205 "is_configured": true, 00:13:36.205 "data_offset": 2048, 00:13:36.205 "data_size": 63488 00:13:36.205 }, 00:13:36.205 { 00:13:36.205 "name": "BaseBdev4", 00:13:36.205 "uuid": "71a64d0f-645e-4763-85c5-f85fb21bf201", 00:13:36.205 "is_configured": true, 00:13:36.205 "data_offset": 2048, 00:13:36.205 "data_size": 63488 00:13:36.205 } 00:13:36.205 ] 00:13:36.205 }' 00:13:36.205 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.205 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.464 
13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.464 [2024-10-01 13:47:46.553729] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.464 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.464 "name": "Existed_Raid", 00:13:36.464 "aliases": [ 00:13:36.464 "2d5caec5-b994-4660-8f37-60da1e20dc0a" 00:13:36.464 ], 00:13:36.464 "product_name": "Raid Volume", 00:13:36.464 "block_size": 512, 00:13:36.464 "num_blocks": 253952, 00:13:36.464 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:36.464 "assigned_rate_limits": { 00:13:36.464 "rw_ios_per_sec": 0, 00:13:36.464 "rw_mbytes_per_sec": 0, 00:13:36.464 "r_mbytes_per_sec": 0, 00:13:36.464 "w_mbytes_per_sec": 0 00:13:36.464 }, 00:13:36.464 "claimed": false, 00:13:36.464 "zoned": false, 00:13:36.464 "supported_io_types": { 00:13:36.464 "read": true, 00:13:36.464 "write": true, 00:13:36.464 "unmap": true, 00:13:36.464 "flush": true, 00:13:36.464 "reset": true, 00:13:36.464 "nvme_admin": false, 00:13:36.464 "nvme_io": false, 00:13:36.464 "nvme_io_md": false, 00:13:36.464 "write_zeroes": true, 00:13:36.464 "zcopy": false, 00:13:36.464 "get_zone_info": false, 00:13:36.464 "zone_management": false, 00:13:36.464 "zone_append": false, 00:13:36.464 "compare": false, 00:13:36.464 "compare_and_write": false, 00:13:36.464 "abort": false, 00:13:36.464 "seek_hole": false, 00:13:36.464 "seek_data": false, 00:13:36.464 "copy": false, 00:13:36.464 
"nvme_iov_md": false 00:13:36.464 }, 00:13:36.464 "memory_domains": [ 00:13:36.464 { 00:13:36.464 "dma_device_id": "system", 00:13:36.464 "dma_device_type": 1 00:13:36.464 }, 00:13:36.464 { 00:13:36.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.464 "dma_device_type": 2 00:13:36.464 }, 00:13:36.464 { 00:13:36.464 "dma_device_id": "system", 00:13:36.464 "dma_device_type": 1 00:13:36.464 }, 00:13:36.464 { 00:13:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.465 "dma_device_type": 2 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "dma_device_id": "system", 00:13:36.465 "dma_device_type": 1 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.465 "dma_device_type": 2 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "dma_device_id": "system", 00:13:36.465 "dma_device_type": 1 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.465 "dma_device_type": 2 00:13:36.465 } 00:13:36.465 ], 00:13:36.465 "driver_specific": { 00:13:36.465 "raid": { 00:13:36.465 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:36.465 "strip_size_kb": 64, 00:13:36.465 "state": "online", 00:13:36.465 "raid_level": "concat", 00:13:36.465 "superblock": true, 00:13:36.465 "num_base_bdevs": 4, 00:13:36.465 "num_base_bdevs_discovered": 4, 00:13:36.465 "num_base_bdevs_operational": 4, 00:13:36.465 "base_bdevs_list": [ 00:13:36.465 { 00:13:36.465 "name": "BaseBdev1", 00:13:36.465 "uuid": "47371fcf-14d8-465b-8bf3-45b511ae25f1", 00:13:36.465 "is_configured": true, 00:13:36.465 "data_offset": 2048, 00:13:36.465 "data_size": 63488 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "name": "BaseBdev2", 00:13:36.465 "uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 00:13:36.465 "is_configured": true, 00:13:36.465 "data_offset": 2048, 00:13:36.465 "data_size": 63488 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "name": "BaseBdev3", 00:13:36.465 "uuid": "b9b0e1d1-8dc6-478e-9333-37a55226b572", 00:13:36.465 "is_configured": true, 
00:13:36.465 "data_offset": 2048, 00:13:36.465 "data_size": 63488 00:13:36.465 }, 00:13:36.465 { 00:13:36.465 "name": "BaseBdev4", 00:13:36.465 "uuid": "71a64d0f-645e-4763-85c5-f85fb21bf201", 00:13:36.465 "is_configured": true, 00:13:36.465 "data_offset": 2048, 00:13:36.465 "data_size": 63488 00:13:36.465 } 00:13:36.465 ] 00:13:36.465 } 00:13:36.465 } 00:13:36.465 }' 00:13:36.465 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.465 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:36.465 BaseBdev2 00:13:36.465 BaseBdev3 00:13:36.465 BaseBdev4' 00:13:36.465 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.724 13:47:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.724 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.725 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.725 [2024-10-01 13:47:46.872966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.725 [2024-10-01 13:47:46.873112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.725 [2024-10-01 13:47:46.873248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.982 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.983 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.983 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.983 13:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.983 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.983 13:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.983 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:36.983 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.983 "name": "Existed_Raid", 00:13:36.983 "uuid": "2d5caec5-b994-4660-8f37-60da1e20dc0a", 00:13:36.983 "strip_size_kb": 64, 00:13:36.983 "state": "offline", 00:13:36.983 "raid_level": "concat", 00:13:36.983 "superblock": true, 00:13:36.983 "num_base_bdevs": 4, 00:13:36.983 "num_base_bdevs_discovered": 3, 00:13:36.983 "num_base_bdevs_operational": 3, 00:13:36.983 "base_bdevs_list": [ 00:13:36.983 { 00:13:36.983 "name": null, 00:13:36.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.983 "is_configured": false, 00:13:36.983 "data_offset": 0, 00:13:36.983 "data_size": 63488 00:13:36.983 }, 00:13:36.983 { 00:13:36.983 "name": "BaseBdev2", 00:13:36.983 "uuid": "7f682425-1f8e-4318-837a-eac840bfc8d8", 00:13:36.983 "is_configured": true, 00:13:36.983 "data_offset": 2048, 00:13:36.983 "data_size": 63488 00:13:36.983 }, 00:13:36.983 { 00:13:36.983 "name": "BaseBdev3", 00:13:36.983 "uuid": "b9b0e1d1-8dc6-478e-9333-37a55226b572", 00:13:36.983 "is_configured": true, 00:13:36.983 "data_offset": 2048, 00:13:36.983 "data_size": 63488 00:13:36.983 }, 00:13:36.983 { 00:13:36.983 "name": "BaseBdev4", 00:13:36.983 "uuid": "71a64d0f-645e-4763-85c5-f85fb21bf201", 00:13:36.983 "is_configured": true, 00:13:36.983 "data_offset": 2048, 00:13:36.983 "data_size": 63488 00:13:36.983 } 00:13:36.983 ] 00:13:36.983 }' 00:13:36.983 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.983 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.269 13:47:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.269 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.270 [2024-10-01 13:47:47.415663] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.536 [2024-10-01 13:47:47.575981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.536 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:37.820 13:47:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 [2024-10-01 13:47:47.731028] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:37.820 [2024-10-01 13:47:47.731081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.820 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.821 BaseBdev2 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.821 [ 00:13:37.821 { 00:13:37.821 "name": "BaseBdev2", 00:13:37.821 "aliases": [ 00:13:37.821 
"dda35933-912e-4038-9d22-0191f34ab819" 00:13:37.821 ], 00:13:37.821 "product_name": "Malloc disk", 00:13:37.821 "block_size": 512, 00:13:37.821 "num_blocks": 65536, 00:13:37.821 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:37.821 "assigned_rate_limits": { 00:13:37.821 "rw_ios_per_sec": 0, 00:13:37.821 "rw_mbytes_per_sec": 0, 00:13:37.821 "r_mbytes_per_sec": 0, 00:13:37.821 "w_mbytes_per_sec": 0 00:13:37.821 }, 00:13:37.821 "claimed": false, 00:13:37.821 "zoned": false, 00:13:37.821 "supported_io_types": { 00:13:37.821 "read": true, 00:13:37.821 "write": true, 00:13:37.821 "unmap": true, 00:13:37.821 "flush": true, 00:13:37.821 "reset": true, 00:13:37.821 "nvme_admin": false, 00:13:37.821 "nvme_io": false, 00:13:37.821 "nvme_io_md": false, 00:13:37.821 "write_zeroes": true, 00:13:37.821 "zcopy": true, 00:13:37.821 "get_zone_info": false, 00:13:37.821 "zone_management": false, 00:13:37.821 "zone_append": false, 00:13:37.821 "compare": false, 00:13:37.821 "compare_and_write": false, 00:13:37.821 "abort": true, 00:13:37.821 "seek_hole": false, 00:13:37.821 "seek_data": false, 00:13:37.821 "copy": true, 00:13:37.821 "nvme_iov_md": false 00:13:37.821 }, 00:13:37.821 "memory_domains": [ 00:13:37.821 { 00:13:37.821 "dma_device_id": "system", 00:13:37.821 "dma_device_type": 1 00:13:37.821 }, 00:13:37.821 { 00:13:37.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.821 "dma_device_type": 2 00:13:37.821 } 00:13:37.821 ], 00:13:37.821 "driver_specific": {} 00:13:37.821 } 00:13:37.821 ] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.821 13:47:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.821 13:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.080 BaseBdev3 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:38.080 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 [ 00:13:38.081 { 
00:13:38.081 "name": "BaseBdev3", 00:13:38.081 "aliases": [ 00:13:38.081 "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5" 00:13:38.081 ], 00:13:38.081 "product_name": "Malloc disk", 00:13:38.081 "block_size": 512, 00:13:38.081 "num_blocks": 65536, 00:13:38.081 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:38.081 "assigned_rate_limits": { 00:13:38.081 "rw_ios_per_sec": 0, 00:13:38.081 "rw_mbytes_per_sec": 0, 00:13:38.081 "r_mbytes_per_sec": 0, 00:13:38.081 "w_mbytes_per_sec": 0 00:13:38.081 }, 00:13:38.081 "claimed": false, 00:13:38.081 "zoned": false, 00:13:38.081 "supported_io_types": { 00:13:38.081 "read": true, 00:13:38.081 "write": true, 00:13:38.081 "unmap": true, 00:13:38.081 "flush": true, 00:13:38.081 "reset": true, 00:13:38.081 "nvme_admin": false, 00:13:38.081 "nvme_io": false, 00:13:38.081 "nvme_io_md": false, 00:13:38.081 "write_zeroes": true, 00:13:38.081 "zcopy": true, 00:13:38.081 "get_zone_info": false, 00:13:38.081 "zone_management": false, 00:13:38.081 "zone_append": false, 00:13:38.081 "compare": false, 00:13:38.081 "compare_and_write": false, 00:13:38.081 "abort": true, 00:13:38.081 "seek_hole": false, 00:13:38.081 "seek_data": false, 00:13:38.081 "copy": true, 00:13:38.081 "nvme_iov_md": false 00:13:38.081 }, 00:13:38.081 "memory_domains": [ 00:13:38.081 { 00:13:38.081 "dma_device_id": "system", 00:13:38.081 "dma_device_type": 1 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.081 "dma_device_type": 2 00:13:38.081 } 00:13:38.081 ], 00:13:38.081 "driver_specific": {} 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 BaseBdev4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:38.081 [ 00:13:38.081 { 00:13:38.081 "name": "BaseBdev4", 00:13:38.081 "aliases": [ 00:13:38.081 "2a70cdb4-4370-4700-974f-5e258ef816f2" 00:13:38.081 ], 00:13:38.081 "product_name": "Malloc disk", 00:13:38.081 "block_size": 512, 00:13:38.081 "num_blocks": 65536, 00:13:38.081 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:38.081 "assigned_rate_limits": { 00:13:38.081 "rw_ios_per_sec": 0, 00:13:38.081 "rw_mbytes_per_sec": 0, 00:13:38.081 "r_mbytes_per_sec": 0, 00:13:38.081 "w_mbytes_per_sec": 0 00:13:38.081 }, 00:13:38.081 "claimed": false, 00:13:38.081 "zoned": false, 00:13:38.081 "supported_io_types": { 00:13:38.081 "read": true, 00:13:38.081 "write": true, 00:13:38.081 "unmap": true, 00:13:38.081 "flush": true, 00:13:38.081 "reset": true, 00:13:38.081 "nvme_admin": false, 00:13:38.081 "nvme_io": false, 00:13:38.081 "nvme_io_md": false, 00:13:38.081 "write_zeroes": true, 00:13:38.081 "zcopy": true, 00:13:38.081 "get_zone_info": false, 00:13:38.081 "zone_management": false, 00:13:38.081 "zone_append": false, 00:13:38.081 "compare": false, 00:13:38.081 "compare_and_write": false, 00:13:38.081 "abort": true, 00:13:38.081 "seek_hole": false, 00:13:38.081 "seek_data": false, 00:13:38.081 "copy": true, 00:13:38.081 "nvme_iov_md": false 00:13:38.081 }, 00:13:38.081 "memory_domains": [ 00:13:38.081 { 00:13:38.081 "dma_device_id": "system", 00:13:38.081 "dma_device_type": 1 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.081 "dma_device_type": 2 00:13:38.081 } 00:13:38.081 ], 00:13:38.081 "driver_specific": {} 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.081 13:47:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 [2024-10-01 13:47:48.167068] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.081 [2024-10-01 13:47:48.167120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.081 [2024-10-01 13:47:48.167147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.081 [2024-10-01 13:47:48.169496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.081 [2024-10-01 13:47:48.169551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.081 "name": "Existed_Raid", 00:13:38.081 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:38.081 "strip_size_kb": 64, 00:13:38.081 "state": "configuring", 00:13:38.081 "raid_level": "concat", 00:13:38.081 "superblock": true, 00:13:38.081 "num_base_bdevs": 4, 00:13:38.081 "num_base_bdevs_discovered": 3, 00:13:38.081 "num_base_bdevs_operational": 4, 00:13:38.081 "base_bdevs_list": [ 00:13:38.081 { 00:13:38.081 "name": "BaseBdev1", 00:13:38.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.081 "is_configured": false, 00:13:38.081 "data_offset": 0, 00:13:38.081 "data_size": 0 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "name": "BaseBdev2", 00:13:38.081 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:38.081 "is_configured": true, 00:13:38.081 "data_offset": 2048, 00:13:38.081 "data_size": 63488 
00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "name": "BaseBdev3", 00:13:38.081 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:38.081 "is_configured": true, 00:13:38.081 "data_offset": 2048, 00:13:38.081 "data_size": 63488 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "name": "BaseBdev4", 00:13:38.081 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:38.081 "is_configured": true, 00:13:38.081 "data_offset": 2048, 00:13:38.081 "data_size": 63488 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 }' 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.081 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 [2024-10-01 13:47:48.606519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:38.679 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.680 "name": "Existed_Raid", 00:13:38.680 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:38.680 "strip_size_kb": 64, 00:13:38.680 "state": "configuring", 00:13:38.680 "raid_level": "concat", 00:13:38.680 "superblock": true, 00:13:38.680 "num_base_bdevs": 4, 00:13:38.680 "num_base_bdevs_discovered": 2, 00:13:38.680 "num_base_bdevs_operational": 4, 00:13:38.680 "base_bdevs_list": [ 00:13:38.680 { 00:13:38.680 "name": "BaseBdev1", 00:13:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.680 "is_configured": false, 00:13:38.680 "data_offset": 0, 00:13:38.680 "data_size": 0 00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": null, 00:13:38.680 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:38.680 "is_configured": false, 00:13:38.680 "data_offset": 0, 00:13:38.680 "data_size": 63488 
00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": "BaseBdev3", 00:13:38.680 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": "BaseBdev4", 00:13:38.680 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 } 00:13:38.680 ] 00:13:38.680 }' 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.680 13:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 [2024-10-01 13:47:49.126238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.940 BaseBdev1 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.940 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.200 [ 00:13:39.200 { 00:13:39.200 "name": "BaseBdev1", 00:13:39.200 "aliases": [ 00:13:39.200 "b925e0b4-f82d-43c3-9ab5-20844154c0cf" 00:13:39.200 ], 00:13:39.200 "product_name": "Malloc disk", 00:13:39.200 "block_size": 512, 00:13:39.200 "num_blocks": 65536, 00:13:39.200 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:39.200 "assigned_rate_limits": { 00:13:39.200 "rw_ios_per_sec": 0, 00:13:39.200 "rw_mbytes_per_sec": 0, 
00:13:39.200 "r_mbytes_per_sec": 0, 00:13:39.200 "w_mbytes_per_sec": 0 00:13:39.200 }, 00:13:39.200 "claimed": true, 00:13:39.200 "claim_type": "exclusive_write", 00:13:39.200 "zoned": false, 00:13:39.200 "supported_io_types": { 00:13:39.200 "read": true, 00:13:39.200 "write": true, 00:13:39.200 "unmap": true, 00:13:39.200 "flush": true, 00:13:39.200 "reset": true, 00:13:39.200 "nvme_admin": false, 00:13:39.200 "nvme_io": false, 00:13:39.200 "nvme_io_md": false, 00:13:39.200 "write_zeroes": true, 00:13:39.200 "zcopy": true, 00:13:39.200 "get_zone_info": false, 00:13:39.200 "zone_management": false, 00:13:39.200 "zone_append": false, 00:13:39.200 "compare": false, 00:13:39.200 "compare_and_write": false, 00:13:39.200 "abort": true, 00:13:39.200 "seek_hole": false, 00:13:39.200 "seek_data": false, 00:13:39.200 "copy": true, 00:13:39.200 "nvme_iov_md": false 00:13:39.200 }, 00:13:39.200 "memory_domains": [ 00:13:39.200 { 00:13:39.200 "dma_device_id": "system", 00:13:39.200 "dma_device_type": 1 00:13:39.200 }, 00:13:39.200 { 00:13:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.200 "dma_device_type": 2 00:13:39.200 } 00:13:39.200 ], 00:13:39.200 "driver_specific": {} 00:13:39.200 } 00:13:39.200 ] 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.200 13:47:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.200 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.200 "name": "Existed_Raid", 00:13:39.200 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:39.200 "strip_size_kb": 64, 00:13:39.200 "state": "configuring", 00:13:39.200 "raid_level": "concat", 00:13:39.200 "superblock": true, 00:13:39.200 "num_base_bdevs": 4, 00:13:39.200 "num_base_bdevs_discovered": 3, 00:13:39.200 "num_base_bdevs_operational": 4, 00:13:39.200 "base_bdevs_list": [ 00:13:39.200 { 00:13:39.200 "name": "BaseBdev1", 00:13:39.200 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:39.200 "is_configured": true, 00:13:39.200 "data_offset": 2048, 00:13:39.200 "data_size": 63488 00:13:39.200 }, 00:13:39.200 { 
00:13:39.200 "name": null, 00:13:39.200 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:39.200 "is_configured": false, 00:13:39.200 "data_offset": 0, 00:13:39.201 "data_size": 63488 00:13:39.201 }, 00:13:39.201 { 00:13:39.201 "name": "BaseBdev3", 00:13:39.201 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:39.201 "is_configured": true, 00:13:39.201 "data_offset": 2048, 00:13:39.201 "data_size": 63488 00:13:39.201 }, 00:13:39.201 { 00:13:39.201 "name": "BaseBdev4", 00:13:39.201 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:39.201 "is_configured": true, 00:13:39.201 "data_offset": 2048, 00:13:39.201 "data_size": 63488 00:13:39.201 } 00:13:39.201 ] 00:13:39.201 }' 00:13:39.201 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.201 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.461 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.461 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.461 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.461 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.461 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.721 [2024-10-01 13:47:49.669591] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.721 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.722 13:47:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.722 "name": "Existed_Raid", 00:13:39.722 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:39.722 "strip_size_kb": 64, 00:13:39.722 "state": "configuring", 00:13:39.722 "raid_level": "concat", 00:13:39.722 "superblock": true, 00:13:39.722 "num_base_bdevs": 4, 00:13:39.722 "num_base_bdevs_discovered": 2, 00:13:39.722 "num_base_bdevs_operational": 4, 00:13:39.722 "base_bdevs_list": [ 00:13:39.722 { 00:13:39.722 "name": "BaseBdev1", 00:13:39.722 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:39.722 "is_configured": true, 00:13:39.722 "data_offset": 2048, 00:13:39.722 "data_size": 63488 00:13:39.722 }, 00:13:39.722 { 00:13:39.722 "name": null, 00:13:39.722 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:39.722 "is_configured": false, 00:13:39.722 "data_offset": 0, 00:13:39.722 "data_size": 63488 00:13:39.722 }, 00:13:39.722 { 00:13:39.722 "name": null, 00:13:39.722 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:39.722 "is_configured": false, 00:13:39.722 "data_offset": 0, 00:13:39.722 "data_size": 63488 00:13:39.722 }, 00:13:39.722 { 00:13:39.722 "name": "BaseBdev4", 00:13:39.722 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:39.722 "is_configured": true, 00:13:39.722 "data_offset": 2048, 00:13:39.722 "data_size": 63488 00:13:39.722 } 00:13:39.722 ] 00:13:39.722 }' 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.722 13:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.981 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.981 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.981 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.981 13:47:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:39.981 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.240 [2024-10-01 13:47:50.192842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.240 "name": "Existed_Raid", 00:13:40.240 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:40.240 "strip_size_kb": 64, 00:13:40.240 "state": "configuring", 00:13:40.240 "raid_level": "concat", 00:13:40.240 "superblock": true, 00:13:40.240 "num_base_bdevs": 4, 00:13:40.240 "num_base_bdevs_discovered": 3, 00:13:40.240 "num_base_bdevs_operational": 4, 00:13:40.240 "base_bdevs_list": [ 00:13:40.240 { 00:13:40.240 "name": "BaseBdev1", 00:13:40.240 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:40.240 "is_configured": true, 00:13:40.240 "data_offset": 2048, 00:13:40.240 "data_size": 63488 00:13:40.240 }, 00:13:40.240 { 00:13:40.240 "name": null, 00:13:40.240 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:40.240 "is_configured": false, 00:13:40.240 "data_offset": 0, 00:13:40.240 "data_size": 63488 00:13:40.240 }, 00:13:40.240 { 00:13:40.240 "name": "BaseBdev3", 00:13:40.240 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:40.240 "is_configured": true, 00:13:40.240 "data_offset": 2048, 00:13:40.240 "data_size": 63488 00:13:40.240 }, 00:13:40.240 { 00:13:40.240 "name": "BaseBdev4", 00:13:40.240 "uuid": 
"2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:40.240 "is_configured": true, 00:13:40.240 "data_offset": 2048, 00:13:40.240 "data_size": 63488 00:13:40.240 } 00:13:40.240 ] 00:13:40.240 }' 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.240 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.499 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.499 [2024-10-01 13:47:50.660226] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.758 "name": "Existed_Raid", 00:13:40.758 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:40.758 "strip_size_kb": 64, 00:13:40.758 "state": "configuring", 00:13:40.758 "raid_level": "concat", 00:13:40.758 "superblock": true, 00:13:40.758 "num_base_bdevs": 4, 00:13:40.758 "num_base_bdevs_discovered": 2, 00:13:40.758 "num_base_bdevs_operational": 4, 00:13:40.758 "base_bdevs_list": [ 00:13:40.758 { 00:13:40.758 "name": null, 00:13:40.758 
"uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:40.758 "is_configured": false, 00:13:40.758 "data_offset": 0, 00:13:40.758 "data_size": 63488 00:13:40.758 }, 00:13:40.758 { 00:13:40.758 "name": null, 00:13:40.758 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:40.758 "is_configured": false, 00:13:40.758 "data_offset": 0, 00:13:40.758 "data_size": 63488 00:13:40.758 }, 00:13:40.758 { 00:13:40.758 "name": "BaseBdev3", 00:13:40.758 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:40.758 "is_configured": true, 00:13:40.758 "data_offset": 2048, 00:13:40.758 "data_size": 63488 00:13:40.758 }, 00:13:40.758 { 00:13:40.758 "name": "BaseBdev4", 00:13:40.758 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:40.758 "is_configured": true, 00:13:40.758 "data_offset": 2048, 00:13:40.758 "data_size": 63488 00:13:40.758 } 00:13:40.758 ] 00:13:40.758 }' 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.758 13:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.017 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.017 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.017 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.017 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.276 [2024-10-01 13:47:51.236064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.276 13:47:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.276 "name": "Existed_Raid", 00:13:41.276 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:41.276 "strip_size_kb": 64, 00:13:41.276 "state": "configuring", 00:13:41.276 "raid_level": "concat", 00:13:41.276 "superblock": true, 00:13:41.276 "num_base_bdevs": 4, 00:13:41.276 "num_base_bdevs_discovered": 3, 00:13:41.276 "num_base_bdevs_operational": 4, 00:13:41.276 "base_bdevs_list": [ 00:13:41.276 { 00:13:41.276 "name": null, 00:13:41.276 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:41.276 "is_configured": false, 00:13:41.276 "data_offset": 0, 00:13:41.276 "data_size": 63488 00:13:41.276 }, 00:13:41.276 { 00:13:41.276 "name": "BaseBdev2", 00:13:41.276 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:41.276 "is_configured": true, 00:13:41.276 "data_offset": 2048, 00:13:41.276 "data_size": 63488 00:13:41.276 }, 00:13:41.276 { 00:13:41.276 "name": "BaseBdev3", 00:13:41.276 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:41.276 "is_configured": true, 00:13:41.276 "data_offset": 2048, 00:13:41.276 "data_size": 63488 00:13:41.276 }, 00:13:41.276 { 00:13:41.276 "name": "BaseBdev4", 00:13:41.276 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:41.276 "is_configured": true, 00:13:41.276 "data_offset": 2048, 00:13:41.276 "data_size": 63488 00:13:41.276 } 00:13:41.276 ] 00:13:41.276 }' 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.276 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.535 13:47:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b925e0b4-f82d-43c3-9ab5-20844154c0cf 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.794 [2024-10-01 13:47:51.799847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.794 [2024-10-01 13:47:51.800098] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:41.794 [2024-10-01 13:47:51.800112] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:41.794 [2024-10-01 13:47:51.800390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:41.794 NewBaseBdev 00:13:41.794 [2024-10-01 13:47:51.800550] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:41.794 [2024-10-01 13:47:51.800565] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:41.794 [2024-10-01 13:47:51.800706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:41.794 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.795 13:47:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.795 [ 00:13:41.795 { 00:13:41.795 "name": "NewBaseBdev", 00:13:41.795 "aliases": [ 00:13:41.795 "b925e0b4-f82d-43c3-9ab5-20844154c0cf" 00:13:41.795 ], 00:13:41.795 "product_name": "Malloc disk", 00:13:41.795 "block_size": 512, 00:13:41.795 "num_blocks": 65536, 00:13:41.795 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:41.795 "assigned_rate_limits": { 00:13:41.795 "rw_ios_per_sec": 0, 00:13:41.795 "rw_mbytes_per_sec": 0, 00:13:41.795 "r_mbytes_per_sec": 0, 00:13:41.795 "w_mbytes_per_sec": 0 00:13:41.795 }, 00:13:41.795 "claimed": true, 00:13:41.795 "claim_type": "exclusive_write", 00:13:41.795 "zoned": false, 00:13:41.795 "supported_io_types": { 00:13:41.795 "read": true, 00:13:41.795 "write": true, 00:13:41.795 "unmap": true, 00:13:41.795 "flush": true, 00:13:41.795 "reset": true, 00:13:41.795 "nvme_admin": false, 00:13:41.795 "nvme_io": false, 00:13:41.795 "nvme_io_md": false, 00:13:41.795 "write_zeroes": true, 00:13:41.795 "zcopy": true, 00:13:41.795 "get_zone_info": false, 00:13:41.795 "zone_management": false, 00:13:41.795 "zone_append": false, 00:13:41.795 "compare": false, 00:13:41.795 "compare_and_write": false, 00:13:41.795 "abort": true, 00:13:41.795 "seek_hole": false, 00:13:41.795 "seek_data": false, 00:13:41.795 "copy": true, 00:13:41.795 "nvme_iov_md": false 00:13:41.795 }, 00:13:41.795 "memory_domains": [ 00:13:41.795 { 00:13:41.795 "dma_device_id": "system", 00:13:41.795 "dma_device_type": 1 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.795 "dma_device_type": 2 00:13:41.795 } 00:13:41.795 ], 00:13:41.795 "driver_specific": {} 00:13:41.795 } 00:13:41.795 ] 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:41.795 13:47:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.795 "name": "Existed_Raid", 00:13:41.795 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:41.795 "strip_size_kb": 64, 00:13:41.795 
"state": "online", 00:13:41.795 "raid_level": "concat", 00:13:41.795 "superblock": true, 00:13:41.795 "num_base_bdevs": 4, 00:13:41.795 "num_base_bdevs_discovered": 4, 00:13:41.795 "num_base_bdevs_operational": 4, 00:13:41.795 "base_bdevs_list": [ 00:13:41.795 { 00:13:41.795 "name": "NewBaseBdev", 00:13:41.795 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": "BaseBdev2", 00:13:41.795 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": "BaseBdev3", 00:13:41.795 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": "BaseBdev4", 00:13:41.795 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 } 00:13:41.795 ] 00:13:41.795 }' 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.795 13:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.384 
13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.384 [2024-10-01 13:47:52.303944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.384 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.384 "name": "Existed_Raid", 00:13:42.384 "aliases": [ 00:13:42.384 "82b8b828-e67c-452e-aa63-4ff95552e203" 00:13:42.384 ], 00:13:42.384 "product_name": "Raid Volume", 00:13:42.384 "block_size": 512, 00:13:42.384 "num_blocks": 253952, 00:13:42.384 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:42.384 "assigned_rate_limits": { 00:13:42.384 "rw_ios_per_sec": 0, 00:13:42.384 "rw_mbytes_per_sec": 0, 00:13:42.384 "r_mbytes_per_sec": 0, 00:13:42.384 "w_mbytes_per_sec": 0 00:13:42.384 }, 00:13:42.384 "claimed": false, 00:13:42.384 "zoned": false, 00:13:42.384 "supported_io_types": { 00:13:42.384 "read": true, 00:13:42.384 "write": true, 00:13:42.384 "unmap": true, 00:13:42.384 "flush": true, 00:13:42.384 "reset": true, 00:13:42.384 "nvme_admin": false, 00:13:42.384 "nvme_io": false, 00:13:42.384 "nvme_io_md": false, 00:13:42.384 "write_zeroes": true, 00:13:42.384 "zcopy": false, 00:13:42.384 "get_zone_info": false, 00:13:42.384 "zone_management": false, 00:13:42.384 "zone_append": false, 00:13:42.384 "compare": false, 00:13:42.384 "compare_and_write": false, 00:13:42.384 "abort": 
false, 00:13:42.384 "seek_hole": false, 00:13:42.384 "seek_data": false, 00:13:42.384 "copy": false, 00:13:42.384 "nvme_iov_md": false 00:13:42.384 }, 00:13:42.384 "memory_domains": [ 00:13:42.384 { 00:13:42.384 "dma_device_id": "system", 00:13:42.384 "dma_device_type": 1 00:13:42.384 }, 00:13:42.384 { 00:13:42.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.384 "dma_device_type": 2 00:13:42.384 }, 00:13:42.384 { 00:13:42.384 "dma_device_id": "system", 00:13:42.384 "dma_device_type": 1 00:13:42.384 }, 00:13:42.385 { 00:13:42.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.385 "dma_device_type": 2 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "dma_device_id": "system", 00:13:42.385 "dma_device_type": 1 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.385 "dma_device_type": 2 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "dma_device_id": "system", 00:13:42.385 "dma_device_type": 1 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.385 "dma_device_type": 2 00:13:42.385 } 00:13:42.385 ], 00:13:42.385 "driver_specific": { 00:13:42.385 "raid": { 00:13:42.385 "uuid": "82b8b828-e67c-452e-aa63-4ff95552e203", 00:13:42.385 "strip_size_kb": 64, 00:13:42.385 "state": "online", 00:13:42.385 "raid_level": "concat", 00:13:42.385 "superblock": true, 00:13:42.385 "num_base_bdevs": 4, 00:13:42.385 "num_base_bdevs_discovered": 4, 00:13:42.385 "num_base_bdevs_operational": 4, 00:13:42.385 "base_bdevs_list": [ 00:13:42.385 { 00:13:42.385 "name": "NewBaseBdev", 00:13:42.385 "uuid": "b925e0b4-f82d-43c3-9ab5-20844154c0cf", 00:13:42.385 "is_configured": true, 00:13:42.385 "data_offset": 2048, 00:13:42.385 "data_size": 63488 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "name": "BaseBdev2", 00:13:42.385 "uuid": "dda35933-912e-4038-9d22-0191f34ab819", 00:13:42.385 "is_configured": true, 00:13:42.385 "data_offset": 2048, 00:13:42.385 "data_size": 63488 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 
"name": "BaseBdev3", 00:13:42.385 "uuid": "6b91b22f-0d27-4b31-9bbc-42f8d5bce0e5", 00:13:42.385 "is_configured": true, 00:13:42.385 "data_offset": 2048, 00:13:42.385 "data_size": 63488 00:13:42.385 }, 00:13:42.385 { 00:13:42.385 "name": "BaseBdev4", 00:13:42.385 "uuid": "2a70cdb4-4370-4700-974f-5e258ef816f2", 00:13:42.385 "is_configured": true, 00:13:42.385 "data_offset": 2048, 00:13:42.385 "data_size": 63488 00:13:42.385 } 00:13:42.385 ] 00:13:42.385 } 00:13:42.385 } 00:13:42.385 }' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:42.385 BaseBdev2 00:13:42.385 BaseBdev3 00:13:42.385 BaseBdev4' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.385 13:47:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.385 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.642 [2024-10-01 13:47:52.639590] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.642 [2024-10-01 13:47:52.639624] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.642 [2024-10-01 13:47:52.639721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.642 [2024-10-01 13:47:52.639809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.642 [2024-10-01 13:47:52.639823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71846 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71846 ']' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71846 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71846 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.642 killing process with pid 71846 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.642 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71846' 00:13:42.643 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71846 00:13:42.643 [2024-10-01 13:47:52.691282] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.643 13:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71846 00:13:43.208 [2024-10-01 13:47:53.125368] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.584 13:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:44.584 ************************************ 00:13:44.584 END TEST raid_state_function_test_sb 00:13:44.584 ************************************ 00:13:44.584 00:13:44.584 real 0m11.852s 00:13:44.584 user 0m18.597s 00:13:44.584 sys 
0m2.271s 00:13:44.584 13:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.584 13:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.584 13:47:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:44.584 13:47:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:44.584 13:47:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.584 13:47:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.584 ************************************ 00:13:44.584 START TEST raid_superblock_test 00:13:44.584 ************************************ 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72523 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72523 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72523 ']' 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.584 13:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.584 [2024-10-01 13:47:54.668319] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:44.584 [2024-10-01 13:47:54.668545] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72523 ] 00:13:44.843 [2024-10-01 13:47:54.849861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.102 [2024-10-01 13:47:55.071844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.102 [2024-10-01 13:47:55.291868] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.360 [2024-10-01 13:47:55.292115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.620 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:45.621 
13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 malloc1 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 [2024-10-01 13:47:55.604468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:45.621 [2024-10-01 13:47:55.604672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.621 [2024-10-01 13:47:55.604738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.621 [2024-10-01 13:47:55.604839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.621 [2024-10-01 13:47:55.607383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.621 [2024-10-01 13:47:55.607583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:45.621 pt1 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 malloc2 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 [2024-10-01 13:47:55.680420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:45.621 [2024-10-01 13:47:55.680617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.621 [2024-10-01 13:47:55.680697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.621 [2024-10-01 13:47:55.680773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.621 [2024-10-01 13:47:55.683262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.621 [2024-10-01 13:47:55.683417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:45.621 
pt2 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 malloc3 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 [2024-10-01 13:47:55.739413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:45.621 [2024-10-01 13:47:55.739650] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.621 [2024-10-01 13:47:55.739720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:45.621 [2024-10-01 13:47:55.739799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.621 [2024-10-01 13:47:55.742214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.621 [2024-10-01 13:47:55.742352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:45.621 pt3 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 malloc4 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.621 [2024-10-01 13:47:55.797843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:45.621 [2024-10-01 13:47:55.797906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.621 [2024-10-01 13:47:55.797941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:45.621 [2024-10-01 13:47:55.797952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.621 [2024-10-01 13:47:55.800530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.621 [2024-10-01 13:47:55.800583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:45.621 pt4 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.621 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:45.622 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.622 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.881 [2024-10-01 13:47:55.813891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:45.881 [2024-10-01 
13:47:55.816289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.881 [2024-10-01 13:47:55.816500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:45.881 [2024-10-01 13:47:55.816612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:45.881 [2024-10-01 13:47:55.816892] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.881 [2024-10-01 13:47:55.817088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:45.881 [2024-10-01 13:47:55.817457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:45.881 [2024-10-01 13:47:55.817671] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.881 [2024-10-01 13:47:55.817718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:45.881 [2024-10-01 13:47:55.818016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.881 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.881 "name": "raid_bdev1", 00:13:45.881 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:45.881 "strip_size_kb": 64, 00:13:45.881 "state": "online", 00:13:45.881 "raid_level": "concat", 00:13:45.881 "superblock": true, 00:13:45.881 "num_base_bdevs": 4, 00:13:45.881 "num_base_bdevs_discovered": 4, 00:13:45.881 "num_base_bdevs_operational": 4, 00:13:45.881 "base_bdevs_list": [ 00:13:45.881 { 00:13:45.881 "name": "pt1", 00:13:45.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:45.881 "is_configured": true, 00:13:45.881 "data_offset": 2048, 00:13:45.881 "data_size": 63488 00:13:45.881 }, 00:13:45.881 { 00:13:45.881 "name": "pt2", 00:13:45.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.881 "is_configured": true, 00:13:45.881 "data_offset": 2048, 00:13:45.881 "data_size": 63488 00:13:45.881 }, 00:13:45.881 { 00:13:45.881 "name": "pt3", 00:13:45.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.881 "is_configured": true, 00:13:45.881 "data_offset": 2048, 00:13:45.881 
"data_size": 63488 00:13:45.881 }, 00:13:45.881 { 00:13:45.881 "name": "pt4", 00:13:45.881 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:45.881 "is_configured": true, 00:13:45.881 "data_offset": 2048, 00:13:45.882 "data_size": 63488 00:13:45.882 } 00:13:45.882 ] 00:13:45.882 }' 00:13:45.882 13:47:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.882 13:47:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.141 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.141 [2024-10-01 13:47:56.301702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.401 "name": "raid_bdev1", 00:13:46.401 "aliases": [ 00:13:46.401 "5340c3f1-8ae7-4676-8bd4-804068d51043" 
00:13:46.401 ], 00:13:46.401 "product_name": "Raid Volume", 00:13:46.401 "block_size": 512, 00:13:46.401 "num_blocks": 253952, 00:13:46.401 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:46.401 "assigned_rate_limits": { 00:13:46.401 "rw_ios_per_sec": 0, 00:13:46.401 "rw_mbytes_per_sec": 0, 00:13:46.401 "r_mbytes_per_sec": 0, 00:13:46.401 "w_mbytes_per_sec": 0 00:13:46.401 }, 00:13:46.401 "claimed": false, 00:13:46.401 "zoned": false, 00:13:46.401 "supported_io_types": { 00:13:46.401 "read": true, 00:13:46.401 "write": true, 00:13:46.401 "unmap": true, 00:13:46.401 "flush": true, 00:13:46.401 "reset": true, 00:13:46.401 "nvme_admin": false, 00:13:46.401 "nvme_io": false, 00:13:46.401 "nvme_io_md": false, 00:13:46.401 "write_zeroes": true, 00:13:46.401 "zcopy": false, 00:13:46.401 "get_zone_info": false, 00:13:46.401 "zone_management": false, 00:13:46.401 "zone_append": false, 00:13:46.401 "compare": false, 00:13:46.401 "compare_and_write": false, 00:13:46.401 "abort": false, 00:13:46.401 "seek_hole": false, 00:13:46.401 "seek_data": false, 00:13:46.401 "copy": false, 00:13:46.401 "nvme_iov_md": false 00:13:46.401 }, 00:13:46.401 "memory_domains": [ 00:13:46.401 { 00:13:46.401 "dma_device_id": "system", 00:13:46.401 "dma_device_type": 1 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.401 "dma_device_type": 2 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "system", 00:13:46.401 "dma_device_type": 1 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.401 "dma_device_type": 2 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "system", 00:13:46.401 "dma_device_type": 1 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.401 "dma_device_type": 2 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": "system", 00:13:46.401 "dma_device_type": 1 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:46.401 "dma_device_type": 2 00:13:46.401 } 00:13:46.401 ], 00:13:46.401 "driver_specific": { 00:13:46.401 "raid": { 00:13:46.401 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:46.401 "strip_size_kb": 64, 00:13:46.401 "state": "online", 00:13:46.401 "raid_level": "concat", 00:13:46.401 "superblock": true, 00:13:46.401 "num_base_bdevs": 4, 00:13:46.401 "num_base_bdevs_discovered": 4, 00:13:46.401 "num_base_bdevs_operational": 4, 00:13:46.401 "base_bdevs_list": [ 00:13:46.401 { 00:13:46.401 "name": "pt1", 00:13:46.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 2048, 00:13:46.401 "data_size": 63488 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "name": "pt2", 00:13:46.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 2048, 00:13:46.401 "data_size": 63488 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "name": "pt3", 00:13:46.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 2048, 00:13:46.401 "data_size": 63488 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "name": "pt4", 00:13:46.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 2048, 00:13:46.401 "data_size": 63488 00:13:46.401 } 00:13:46.401 ] 00:13:46.401 } 00:13:46.401 } 00:13:46.401 }' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:46.401 pt2 00:13:46.401 pt3 00:13:46.401 pt4' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.401 13:47:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.401 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:46.660 [2024-10-01 13:47:56.633171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5340c3f1-8ae7-4676-8bd4-804068d51043 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5340c3f1-8ae7-4676-8bd4-804068d51043 ']' 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 [2024-10-01 13:47:56.684785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.660 [2024-10-01 13:47:56.684817] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.660 [2024-10-01 13:47:56.684897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.660 [2024-10-01 13:47:56.684966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.660 [2024-10-01 13:47:56.684987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.660 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.661 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.978 [2024-10-01 13:47:56.852636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:46.978 [2024-10-01 13:47:56.854849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:46.978 [2024-10-01 13:47:56.854895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:46.978 [2024-10-01 13:47:56.854931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:46.978 [2024-10-01 13:47:56.854983] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:46.978 [2024-10-01 13:47:56.855042] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:46.978 [2024-10-01 13:47:56.855081] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:46.978 [2024-10-01 13:47:56.855104] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:46.978 [2024-10-01 13:47:56.855121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.978 [2024-10-01 13:47:56.855135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:46.978 request: 00:13:46.978 { 00:13:46.978 "name": "raid_bdev1", 00:13:46.978 "raid_level": "concat", 00:13:46.978 "base_bdevs": [ 00:13:46.978 "malloc1", 00:13:46.978 "malloc2", 00:13:46.978 "malloc3", 00:13:46.978 "malloc4" 00:13:46.978 ], 00:13:46.978 "strip_size_kb": 64, 00:13:46.978 "superblock": false, 00:13:46.978 "method": "bdev_raid_create", 00:13:46.978 "req_id": 1 00:13:46.978 } 00:13:46.978 Got JSON-RPC error response 00:13:46.978 response: 00:13:46.978 { 00:13:46.978 "code": -17, 00:13:46.978 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:46.978 } 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.978 [2024-10-01 13:47:56.916506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:46.978 [2024-10-01 13:47:56.916564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.978 [2024-10-01 13:47:56.916583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:46.978 [2024-10-01 13:47:56.916597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.978 [2024-10-01 13:47:56.919119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.978 [2024-10-01 13:47:56.919165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:46.978 [2024-10-01 13:47:56.919242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:46.978 [2024-10-01 13:47:56.919303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:46.978 pt1 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.978 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.978 "name": "raid_bdev1", 00:13:46.978 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:46.978 "strip_size_kb": 64, 00:13:46.978 "state": "configuring", 00:13:46.978 "raid_level": "concat", 00:13:46.978 "superblock": true, 00:13:46.978 "num_base_bdevs": 4, 00:13:46.978 "num_base_bdevs_discovered": 1, 00:13:46.978 "num_base_bdevs_operational": 4, 00:13:46.978 "base_bdevs_list": [ 00:13:46.978 { 00:13:46.978 "name": "pt1", 00:13:46.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.978 "is_configured": true, 00:13:46.978 "data_offset": 2048, 00:13:46.978 "data_size": 63488 00:13:46.978 }, 00:13:46.978 { 00:13:46.978 "name": null, 00:13:46.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.979 "is_configured": false, 00:13:46.979 "data_offset": 2048, 00:13:46.979 "data_size": 63488 00:13:46.979 }, 00:13:46.979 { 00:13:46.979 "name": null, 00:13:46.979 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.979 "is_configured": false, 00:13:46.979 "data_offset": 2048, 00:13:46.979 "data_size": 63488 00:13:46.979 }, 00:13:46.979 { 00:13:46.979 "name": null, 00:13:46.979 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.979 "is_configured": false, 00:13:46.979 "data_offset": 2048, 00:13:46.979 "data_size": 63488 00:13:46.979 } 00:13:46.979 ] 00:13:46.979 }' 00:13:46.979 13:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.979 13:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.238 [2024-10-01 13:47:57.371897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:47.238 [2024-10-01 13:47:57.372117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.238 [2024-10-01 13:47:57.372177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.238 [2024-10-01 13:47:57.372267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.238 [2024-10-01 13:47:57.372822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.238 [2024-10-01 13:47:57.372864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:47.238 [2024-10-01 13:47:57.372953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:47.238 [2024-10-01 13:47:57.372980] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:47.238 pt2 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.238 [2024-10-01 13:47:57.383878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.238 13:47:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.238 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.497 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.497 "name": "raid_bdev1", 00:13:47.497 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:47.497 "strip_size_kb": 64, 00:13:47.497 "state": "configuring", 00:13:47.497 "raid_level": "concat", 00:13:47.497 "superblock": true, 00:13:47.497 "num_base_bdevs": 4, 00:13:47.497 "num_base_bdevs_discovered": 1, 00:13:47.497 "num_base_bdevs_operational": 4, 00:13:47.497 "base_bdevs_list": [ 00:13:47.497 { 00:13:47.497 "name": "pt1", 00:13:47.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.497 "is_configured": true, 00:13:47.497 "data_offset": 2048, 00:13:47.497 "data_size": 63488 00:13:47.497 }, 00:13:47.497 { 00:13:47.497 "name": null, 00:13:47.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.497 "is_configured": false, 00:13:47.497 "data_offset": 0, 00:13:47.497 "data_size": 63488 00:13:47.497 }, 00:13:47.497 { 00:13:47.497 "name": null, 00:13:47.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.497 "is_configured": false, 00:13:47.497 "data_offset": 2048, 00:13:47.497 "data_size": 63488 00:13:47.497 }, 00:13:47.497 { 00:13:47.497 "name": null, 00:13:47.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.497 "is_configured": false, 00:13:47.497 "data_offset": 2048, 00:13:47.497 "data_size": 63488 00:13:47.497 } 00:13:47.497 ] 00:13:47.497 }' 00:13:47.497 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.497 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.757 [2024-10-01 13:47:57.827661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:47.757 [2024-10-01 13:47:57.827733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.757 [2024-10-01 13:47:57.827760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:47.757 [2024-10-01 13:47:57.827773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.757 [2024-10-01 13:47:57.828248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.757 [2024-10-01 13:47:57.828269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:47.757 [2024-10-01 13:47:57.828360] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:47.757 [2024-10-01 13:47:57.828386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:47.757 pt2 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.757 [2024-10-01 13:47:57.839643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:47.757 [2024-10-01 13:47:57.839711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.757 [2024-10-01 13:47:57.839744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:47.757 [2024-10-01 13:47:57.839758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.757 [2024-10-01 13:47:57.840202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.757 [2024-10-01 13:47:57.840227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:47.757 [2024-10-01 13:47:57.840307] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:47.757 [2024-10-01 13:47:57.840328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:47.757 pt3 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.757 [2024-10-01 13:47:57.851604] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:13:47.757 [2024-10-01 13:47:57.851666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.757 [2024-10-01 13:47:57.851690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:47.757 [2024-10-01 13:47:57.851701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.757 [2024-10-01 13:47:57.852158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.757 [2024-10-01 13:47:57.852182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:47.757 [2024-10-01 13:47:57.852255] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:47.757 [2024-10-01 13:47:57.852276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:47.757 [2024-10-01 13:47:57.852453] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:47.757 [2024-10-01 13:47:57.852465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:47.757 [2024-10-01 13:47:57.852746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.757 [2024-10-01 13:47:57.852890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:47.757 [2024-10-01 13:47:57.852968] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:47.757 [2024-10-01 13:47:57.853124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.757 pt4 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:47.757 
13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.757 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.757 "name": "raid_bdev1", 00:13:47.757 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:47.757 "strip_size_kb": 64, 00:13:47.757 "state": "online", 00:13:47.757 "raid_level": "concat", 00:13:47.757 "superblock": true, 00:13:47.757 
"num_base_bdevs": 4, 00:13:47.757 "num_base_bdevs_discovered": 4, 00:13:47.757 "num_base_bdevs_operational": 4, 00:13:47.757 "base_bdevs_list": [ 00:13:47.757 { 00:13:47.757 "name": "pt1", 00:13:47.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.757 "is_configured": true, 00:13:47.757 "data_offset": 2048, 00:13:47.757 "data_size": 63488 00:13:47.757 }, 00:13:47.757 { 00:13:47.757 "name": "pt2", 00:13:47.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.757 "is_configured": true, 00:13:47.757 "data_offset": 2048, 00:13:47.757 "data_size": 63488 00:13:47.757 }, 00:13:47.757 { 00:13:47.757 "name": "pt3", 00:13:47.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.757 "is_configured": true, 00:13:47.757 "data_offset": 2048, 00:13:47.757 "data_size": 63488 00:13:47.757 }, 00:13:47.757 { 00:13:47.757 "name": "pt4", 00:13:47.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.757 "is_configured": true, 00:13:47.757 "data_offset": 2048, 00:13:47.758 "data_size": 63488 00:13:47.758 } 00:13:47.758 ] 00:13:47.758 }' 00:13:47.758 13:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.758 13:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.324 [2024-10-01 13:47:58.319984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.324 "name": "raid_bdev1", 00:13:48.324 "aliases": [ 00:13:48.324 "5340c3f1-8ae7-4676-8bd4-804068d51043" 00:13:48.324 ], 00:13:48.324 "product_name": "Raid Volume", 00:13:48.324 "block_size": 512, 00:13:48.324 "num_blocks": 253952, 00:13:48.324 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:48.324 "assigned_rate_limits": { 00:13:48.324 "rw_ios_per_sec": 0, 00:13:48.324 "rw_mbytes_per_sec": 0, 00:13:48.324 "r_mbytes_per_sec": 0, 00:13:48.324 "w_mbytes_per_sec": 0 00:13:48.324 }, 00:13:48.324 "claimed": false, 00:13:48.324 "zoned": false, 00:13:48.324 "supported_io_types": { 00:13:48.324 "read": true, 00:13:48.324 "write": true, 00:13:48.324 "unmap": true, 00:13:48.324 "flush": true, 00:13:48.324 "reset": true, 00:13:48.324 "nvme_admin": false, 00:13:48.324 "nvme_io": false, 00:13:48.324 "nvme_io_md": false, 00:13:48.324 "write_zeroes": true, 00:13:48.324 "zcopy": false, 00:13:48.324 "get_zone_info": false, 00:13:48.324 "zone_management": false, 00:13:48.324 "zone_append": false, 00:13:48.324 "compare": false, 00:13:48.324 "compare_and_write": false, 00:13:48.324 "abort": false, 00:13:48.324 "seek_hole": false, 00:13:48.324 "seek_data": false, 00:13:48.324 "copy": false, 00:13:48.324 "nvme_iov_md": false 00:13:48.324 }, 00:13:48.324 "memory_domains": [ 00:13:48.324 { 00:13:48.324 "dma_device_id": "system", 
00:13:48.324 "dma_device_type": 1 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.324 "dma_device_type": 2 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "system", 00:13:48.324 "dma_device_type": 1 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.324 "dma_device_type": 2 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "system", 00:13:48.324 "dma_device_type": 1 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.324 "dma_device_type": 2 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "system", 00:13:48.324 "dma_device_type": 1 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.324 "dma_device_type": 2 00:13:48.324 } 00:13:48.324 ], 00:13:48.324 "driver_specific": { 00:13:48.324 "raid": { 00:13:48.324 "uuid": "5340c3f1-8ae7-4676-8bd4-804068d51043", 00:13:48.324 "strip_size_kb": 64, 00:13:48.324 "state": "online", 00:13:48.324 "raid_level": "concat", 00:13:48.324 "superblock": true, 00:13:48.324 "num_base_bdevs": 4, 00:13:48.324 "num_base_bdevs_discovered": 4, 00:13:48.324 "num_base_bdevs_operational": 4, 00:13:48.324 "base_bdevs_list": [ 00:13:48.324 { 00:13:48.324 "name": "pt1", 00:13:48.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.324 "is_configured": true, 00:13:48.324 "data_offset": 2048, 00:13:48.324 "data_size": 63488 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "name": "pt2", 00:13:48.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.324 "is_configured": true, 00:13:48.324 "data_offset": 2048, 00:13:48.324 "data_size": 63488 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "name": "pt3", 00:13:48.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.324 "is_configured": true, 00:13:48.324 "data_offset": 2048, 00:13:48.324 "data_size": 63488 00:13:48.324 }, 00:13:48.324 { 00:13:48.324 "name": "pt4", 00:13:48.324 
"uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.324 "is_configured": true, 00:13:48.324 "data_offset": 2048, 00:13:48.324 "data_size": 63488 00:13:48.324 } 00:13:48.324 ] 00:13:48.324 } 00:13:48.324 } 00:13:48.324 }' 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:48.324 pt2 00:13:48.324 pt3 00:13:48.324 pt4' 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.324 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.325 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.583 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.584 [2024-10-01 13:47:58.663901] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5340c3f1-8ae7-4676-8bd4-804068d51043 '!=' 5340c3f1-8ae7-4676-8bd4-804068d51043 ']' 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72523 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72523 ']' 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72523 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:48.584 13:47:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72523 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.584 killing process with pid 72523 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72523' 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72523 00:13:48.584 [2024-10-01 13:47:58.746670] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.584 [2024-10-01 13:47:58.746782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.584 13:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72523 00:13:48.584 [2024-10-01 13:47:58.746860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.584 [2024-10-01 13:47:58.746873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:49.181 [2024-10-01 13:47:59.179185] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.559 13:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:50.559 00:13:50.559 real 0m5.994s 00:13:50.559 user 0m8.530s 00:13:50.559 sys 0m1.096s 00:13:50.559 13:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.559 13:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.559 ************************************ 00:13:50.559 END TEST raid_superblock_test 00:13:50.559 ************************************ 00:13:50.559 
13:48:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:50.559 13:48:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:50.559 13:48:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.559 13:48:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.559 ************************************ 00:13:50.559 START TEST raid_read_error_test 00:13:50.559 ************************************ 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SweJUbXeAQ 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72783 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72783 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72783 ']' 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.559 13:48:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.559 [2024-10-01 13:48:00.746998] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:50.559 [2024-10-01 13:48:00.747135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72783 ] 00:13:50.817 [2024-10-01 13:48:00.909803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.077 [2024-10-01 13:48:01.149359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.335 [2024-10-01 13:48:01.381412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.335 [2024-10-01 13:48:01.381471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 BaseBdev1_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 true 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 [2024-10-01 13:48:01.693149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:51.593 [2024-10-01 13:48:01.693274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.593 [2024-10-01 13:48:01.693302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:51.593 [2024-10-01 13:48:01.693318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.593 [2024-10-01 13:48:01.695968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.593 [2024-10-01 13:48:01.696017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.593 BaseBdev1 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 BaseBdev2_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 true 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.593 [2024-10-01 13:48:01.777358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:51.593 [2024-10-01 13:48:01.777446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.593 [2024-10-01 13:48:01.777467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:51.593 [2024-10-01 13:48:01.777482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.593 [2024-10-01 13:48:01.780273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.593 [2024-10-01 13:48:01.780453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.593 BaseBdev2 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.593 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:51.852 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 BaseBdev3_malloc 00:13:51.853 13:48:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 true 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 [2024-10-01 13:48:01.850248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:51.853 [2024-10-01 13:48:01.850307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.853 [2024-10-01 13:48:01.850329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:51.853 [2024-10-01 13:48:01.850344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.853 [2024-10-01 13:48:01.852970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.853 [2024-10-01 13:48:01.853016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:51.853 BaseBdev3 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 BaseBdev4_malloc 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 true 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 [2024-10-01 13:48:01.921814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:51.853 [2024-10-01 13:48:01.921879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.853 [2024-10-01 13:48:01.921903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:51.853 [2024-10-01 13:48:01.921921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.853 [2024-10-01 13:48:01.924545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.853 [2024-10-01 13:48:01.924752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:51.853 BaseBdev4 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 [2024-10-01 13:48:01.933860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.853 [2024-10-01 13:48:01.936113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.853 [2024-10-01 13:48:01.936326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.853 [2024-10-01 13:48:01.936414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:51.853 [2024-10-01 13:48:01.936632] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:51.853 [2024-10-01 13:48:01.936649] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:51.853 [2024-10-01 13:48:01.936923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:51.853 [2024-10-01 13:48:01.937093] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:51.853 [2024-10-01 13:48:01.937104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:51.853 [2024-10-01 13:48:01.937265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:51.853 13:48:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.853 "name": "raid_bdev1", 00:13:51.853 "uuid": "6c258069-0a6f-4311-b94a-b6d55275e985", 00:13:51.853 "strip_size_kb": 64, 00:13:51.853 "state": "online", 00:13:51.853 "raid_level": "concat", 00:13:51.853 "superblock": true, 00:13:51.853 "num_base_bdevs": 4, 00:13:51.853 "num_base_bdevs_discovered": 4, 00:13:51.853 "num_base_bdevs_operational": 4, 00:13:51.853 "base_bdevs_list": [ 
00:13:51.853 { 00:13:51.853 "name": "BaseBdev1", 00:13:51.853 "uuid": "ce99b42a-e7f3-5b9d-82e5-393ce9a90b53", 00:13:51.853 "is_configured": true, 00:13:51.853 "data_offset": 2048, 00:13:51.853 "data_size": 63488 00:13:51.853 }, 00:13:51.853 { 00:13:51.853 "name": "BaseBdev2", 00:13:51.853 "uuid": "de0a0202-8323-5be7-9030-4ddeca99b6fb", 00:13:51.853 "is_configured": true, 00:13:51.853 "data_offset": 2048, 00:13:51.853 "data_size": 63488 00:13:51.853 }, 00:13:51.853 { 00:13:51.853 "name": "BaseBdev3", 00:13:51.853 "uuid": "6e94470f-0ec3-50b9-a324-ca91a265f4ba", 00:13:51.853 "is_configured": true, 00:13:51.853 "data_offset": 2048, 00:13:51.853 "data_size": 63488 00:13:51.853 }, 00:13:51.853 { 00:13:51.853 "name": "BaseBdev4", 00:13:51.853 "uuid": "9cf61313-086f-5463-a116-0d0e031bd7f4", 00:13:51.853 "is_configured": true, 00:13:51.853 "data_offset": 2048, 00:13:51.853 "data_size": 63488 00:13:51.853 } 00:13:51.853 ] 00:13:51.853 }' 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.853 13:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.421 13:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:52.421 13:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.421 [2024-10-01 13:48:02.478525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.357 13:48:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.357 13:48:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.357 "name": "raid_bdev1", 00:13:53.357 "uuid": "6c258069-0a6f-4311-b94a-b6d55275e985", 00:13:53.357 "strip_size_kb": 64, 00:13:53.357 "state": "online", 00:13:53.357 "raid_level": "concat", 00:13:53.357 "superblock": true, 00:13:53.357 "num_base_bdevs": 4, 00:13:53.357 "num_base_bdevs_discovered": 4, 00:13:53.357 "num_base_bdevs_operational": 4, 00:13:53.357 "base_bdevs_list": [ 00:13:53.357 { 00:13:53.357 "name": "BaseBdev1", 00:13:53.357 "uuid": "ce99b42a-e7f3-5b9d-82e5-393ce9a90b53", 00:13:53.357 "is_configured": true, 00:13:53.357 "data_offset": 2048, 00:13:53.357 "data_size": 63488 00:13:53.357 }, 00:13:53.357 { 00:13:53.357 "name": "BaseBdev2", 00:13:53.357 "uuid": "de0a0202-8323-5be7-9030-4ddeca99b6fb", 00:13:53.357 "is_configured": true, 00:13:53.357 "data_offset": 2048, 00:13:53.357 "data_size": 63488 00:13:53.357 }, 00:13:53.357 { 00:13:53.357 "name": "BaseBdev3", 00:13:53.357 "uuid": "6e94470f-0ec3-50b9-a324-ca91a265f4ba", 00:13:53.357 "is_configured": true, 00:13:53.357 "data_offset": 2048, 00:13:53.357 "data_size": 63488 00:13:53.357 }, 00:13:53.357 { 00:13:53.357 "name": "BaseBdev4", 00:13:53.357 "uuid": "9cf61313-086f-5463-a116-0d0e031bd7f4", 00:13:53.357 "is_configured": true, 00:13:53.357 "data_offset": 2048, 00:13:53.357 "data_size": 63488 00:13:53.357 } 00:13:53.357 ] 00:13:53.357 }' 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.357 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.924 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.924 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.924 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.925 [2024-10-01 13:48:03.829665] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.925 [2024-10-01 13:48:03.829842] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.925 [2024-10-01 13:48:03.832775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.925 [2024-10-01 13:48:03.832834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.925 [2024-10-01 13:48:03.832878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.925 [2024-10-01 13:48:03.832893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:53.925 { 00:13:53.925 "results": [ 00:13:53.925 { 00:13:53.925 "job": "raid_bdev1", 00:13:53.925 "core_mask": "0x1", 00:13:53.925 "workload": "randrw", 00:13:53.925 "percentage": 50, 00:13:53.925 "status": "finished", 00:13:53.925 "queue_depth": 1, 00:13:53.925 "io_size": 131072, 00:13:53.925 "runtime": 1.351251, 00:13:53.925 "iops": 14627.926269804795, 00:13:53.925 "mibps": 1828.4907837255994, 00:13:53.925 "io_failed": 1, 00:13:53.925 "io_timeout": 0, 00:13:53.925 "avg_latency_us": 94.77103842089662, 00:13:53.925 "min_latency_us": 28.58152610441767, 00:13:53.925 "max_latency_us": 1677.879518072289 00:13:53.925 } 00:13:53.925 ], 00:13:53.925 "core_count": 1 00:13:53.925 } 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72783 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72783 ']' 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72783 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72783 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72783' 00:13:53.925 killing process with pid 72783 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72783 00:13:53.925 [2024-10-01 13:48:03.881824] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.925 13:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72783 00:13:54.183 [2024-10-01 13:48:04.240555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.559 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:55.559 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SweJUbXeAQ 00:13:55.559 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:55.819 00:13:55.819 real 0m5.129s 00:13:55.819 user 0m5.922s 00:13:55.819 sys 0m0.690s 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:55.819 13:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.819 ************************************ 00:13:55.819 END TEST raid_read_error_test 00:13:55.819 ************************************ 00:13:55.819 13:48:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:55.819 13:48:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:55.819 13:48:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.819 13:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.819 ************************************ 00:13:55.819 START TEST raid_write_error_test 00:13:55.819 ************************************ 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gQedQjcXOS 00:13:55.819 13:48:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72939 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72939 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72939 ']' 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.819 13:48:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.819 [2024-10-01 13:48:05.987013] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:55.819 [2024-10-01 13:48:05.987191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:13:56.077 [2024-10-01 13:48:06.177421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.335 [2024-10-01 13:48:06.418077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.593 [2024-10-01 13:48:06.648246] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.593 [2024-10-01 13:48:06.648292] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.852 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 BaseBdev1_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 true 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 [2024-10-01 13:48:07.106711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:57.111 [2024-10-01 13:48:07.106900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.111 [2024-10-01 13:48:07.106961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:57.111 [2024-10-01 13:48:07.107053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.111 [2024-10-01 13:48:07.109860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.111 [2024-10-01 13:48:07.110029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.111 BaseBdev1 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 BaseBdev2_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:57.111 13:48:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 true 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 [2024-10-01 13:48:07.188191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:57.111 [2024-10-01 13:48:07.188258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.111 [2024-10-01 13:48:07.188279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:57.111 [2024-10-01 13:48:07.188294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.111 [2024-10-01 13:48:07.190840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.111 [2024-10-01 13:48:07.190901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.111 BaseBdev2 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:57.111 BaseBdev3_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.111 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.112 true 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.112 [2024-10-01 13:48:07.259889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:57.112 [2024-10-01 13:48:07.260086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.112 [2024-10-01 13:48:07.260146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:57.112 [2024-10-01 13:48:07.260271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.112 [2024-10-01 13:48:07.262968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.112 [2024-10-01 13:48:07.263014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.112 BaseBdev3 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.112 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 BaseBdev4_malloc 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 true 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 [2024-10-01 13:48:07.331601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:57.371 [2024-10-01 13:48:07.331676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.371 [2024-10-01 13:48:07.331708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:57.371 [2024-10-01 13:48:07.331734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.371 [2024-10-01 13:48:07.334335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.371 [2024-10-01 13:48:07.334385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.371 BaseBdev4 
00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 [2024-10-01 13:48:07.343722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.371 [2024-10-01 13:48:07.345984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.371 [2024-10-01 13:48:07.346097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.371 [2024-10-01 13:48:07.346163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.371 [2024-10-01 13:48:07.346412] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:57.371 [2024-10-01 13:48:07.346431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:57.371 [2024-10-01 13:48:07.346699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:57.371 [2024-10-01 13:48:07.346870] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:57.371 [2024-10-01 13:48:07.346881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:57.371 [2024-10-01 13:48:07.347040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.371 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.371 "name": "raid_bdev1", 00:13:57.371 "uuid": "1c26c34d-dcac-4e66-83ae-8661590ec1c7", 00:13:57.371 "strip_size_kb": 64, 00:13:57.371 "state": "online", 00:13:57.371 "raid_level": "concat", 00:13:57.371 "superblock": true, 00:13:57.371 "num_base_bdevs": 4, 00:13:57.371 "num_base_bdevs_discovered": 4, 00:13:57.371 
"num_base_bdevs_operational": 4, 00:13:57.371 "base_bdevs_list": [ 00:13:57.371 { 00:13:57.371 "name": "BaseBdev1", 00:13:57.371 "uuid": "ad37f3ae-b967-5968-9ba6-a783593afd66", 00:13:57.371 "is_configured": true, 00:13:57.371 "data_offset": 2048, 00:13:57.371 "data_size": 63488 00:13:57.371 }, 00:13:57.371 { 00:13:57.371 "name": "BaseBdev2", 00:13:57.371 "uuid": "cf216aec-f13d-5015-bd78-5d2dd1e8bbf4", 00:13:57.371 "is_configured": true, 00:13:57.371 "data_offset": 2048, 00:13:57.371 "data_size": 63488 00:13:57.371 }, 00:13:57.371 { 00:13:57.371 "name": "BaseBdev3", 00:13:57.371 "uuid": "1fa84e71-1e87-5a3a-8ec9-f067e8ea2080", 00:13:57.371 "is_configured": true, 00:13:57.371 "data_offset": 2048, 00:13:57.371 "data_size": 63488 00:13:57.371 }, 00:13:57.371 { 00:13:57.371 "name": "BaseBdev4", 00:13:57.371 "uuid": "6554b29c-7038-5e4f-84e8-702d135ffbce", 00:13:57.371 "is_configured": true, 00:13:57.371 "data_offset": 2048, 00:13:57.371 "data_size": 63488 00:13:57.371 } 00:13:57.371 ] 00:13:57.372 }' 00:13:57.372 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.372 13:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.938 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:57.938 13:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:57.938 [2024-10-01 13:48:07.929032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 13:48:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.903 "name": "raid_bdev1", 00:13:58.903 "uuid": "1c26c34d-dcac-4e66-83ae-8661590ec1c7", 00:13:58.903 "strip_size_kb": 64, 00:13:58.903 "state": "online", 00:13:58.903 "raid_level": "concat", 00:13:58.903 "superblock": true, 00:13:58.903 "num_base_bdevs": 4, 00:13:58.903 "num_base_bdevs_discovered": 4, 00:13:58.903 "num_base_bdevs_operational": 4, 00:13:58.903 "base_bdevs_list": [ 00:13:58.903 { 00:13:58.903 "name": "BaseBdev1", 00:13:58.903 "uuid": "ad37f3ae-b967-5968-9ba6-a783593afd66", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 2048, 00:13:58.903 "data_size": 63488 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev2", 00:13:58.903 "uuid": "cf216aec-f13d-5015-bd78-5d2dd1e8bbf4", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 2048, 00:13:58.903 "data_size": 63488 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev3", 00:13:58.903 "uuid": "1fa84e71-1e87-5a3a-8ec9-f067e8ea2080", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 2048, 00:13:58.903 "data_size": 63488 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev4", 00:13:58.903 "uuid": "6554b29c-7038-5e4f-84e8-702d135ffbce", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 2048, 00:13:58.903 "data_size": 63488 00:13:58.903 } 00:13:58.903 ] 00:13:58.903 }' 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.903 13:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.161 13:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.161 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.161 13:48:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.161 [2024-10-01 13:48:09.310554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.161 [2024-10-01 13:48:09.310605] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.161 [2024-10-01 13:48:09.313575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.161 [2024-10-01 13:48:09.313750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.161 [2024-10-01 13:48:09.313834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.161 [2024-10-01 13:48:09.314031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:59.161 { 00:13:59.161 "results": [ 00:13:59.161 { 00:13:59.161 "job": "raid_bdev1", 00:13:59.161 "core_mask": "0x1", 00:13:59.161 "workload": "randrw", 00:13:59.161 "percentage": 50, 00:13:59.161 "status": "finished", 00:13:59.161 "queue_depth": 1, 00:13:59.161 "io_size": 131072, 00:13:59.161 "runtime": 1.381246, 00:13:59.161 "iops": 14697.599124268958, 00:13:59.161 "mibps": 1837.1998905336197, 00:13:59.161 "io_failed": 1, 00:13:59.161 "io_timeout": 0, 00:13:59.161 "avg_latency_us": 94.24505073787417, 00:13:59.161 "min_latency_us": 27.142168674698794, 00:13:59.161 "max_latency_us": 1539.701204819277 00:13:59.162 } 00:13:59.162 ], 00:13:59.162 "core_count": 1 00:13:59.162 } 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72939 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72939 ']' 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72939 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.162 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72939 00:13:59.421 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.421 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.421 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72939' 00:13:59.421 killing process with pid 72939 00:13:59.421 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72939 00:13:59.421 [2024-10-01 13:48:09.366396] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.421 13:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72939 00:13:59.680 [2024-10-01 13:48:09.721167] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gQedQjcXOS 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:01.058 ************************************ 00:14:01.058 END TEST raid_write_error_test 00:14:01.058 ************************************ 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.058 13:48:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:01.058 00:14:01.058 real 0m5.314s 00:14:01.058 user 0m6.385s 00:14:01.058 sys 0m0.692s 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.058 13:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.058 13:48:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:01.058 13:48:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:01.058 13:48:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:01.058 13:48:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.058 13:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.058 ************************************ 00:14:01.058 START TEST raid_state_function_test 00:14:01.058 ************************************ 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:01.058 13:48:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73088 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73088' 00:14:01.058 Process raid pid: 73088 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73088 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73088 ']' 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.058 13:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 [2024-10-01 13:48:11.338698] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:01.316 [2024-10-01 13:48:11.338834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.572 [2024-10-01 13:48:11.510231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.572 [2024-10-01 13:48:11.732099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.829 [2024-10-01 13:48:11.953604] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.829 [2024-10-01 13:48:11.953881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.087 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.087 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:02.087 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.087 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.087 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.087 [2024-10-01 13:48:12.193324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.087 [2024-10-01 13:48:12.193381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.087 [2024-10-01 13:48:12.193406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.087 [2024-10-01 13:48:12.193421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.087 [2024-10-01 13:48:12.193429] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:02.087 [2024-10-01 13:48:12.193441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.087 [2024-10-01 13:48:12.193450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.088 [2024-10-01 13:48:12.193464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.088 "name": "Existed_Raid", 00:14:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.088 "strip_size_kb": 0, 00:14:02.088 "state": "configuring", 00:14:02.088 "raid_level": "raid1", 00:14:02.088 "superblock": false, 00:14:02.088 "num_base_bdevs": 4, 00:14:02.088 "num_base_bdevs_discovered": 0, 00:14:02.088 "num_base_bdevs_operational": 4, 00:14:02.088 "base_bdevs_list": [ 00:14:02.088 { 00:14:02.088 "name": "BaseBdev1", 00:14:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.088 "is_configured": false, 00:14:02.088 "data_offset": 0, 00:14:02.088 "data_size": 0 00:14:02.088 }, 00:14:02.088 { 00:14:02.088 "name": "BaseBdev2", 00:14:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.088 "is_configured": false, 00:14:02.088 "data_offset": 0, 00:14:02.088 "data_size": 0 00:14:02.088 }, 00:14:02.088 { 00:14:02.088 "name": "BaseBdev3", 00:14:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.088 "is_configured": false, 00:14:02.088 "data_offset": 0, 00:14:02.088 "data_size": 0 00:14:02.088 }, 00:14:02.088 { 00:14:02.088 "name": "BaseBdev4", 00:14:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.088 "is_configured": false, 00:14:02.088 "data_offset": 0, 00:14:02.088 "data_size": 0 00:14:02.088 } 00:14:02.088 ] 00:14:02.088 }' 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.088 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 [2024-10-01 13:48:12.604679] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.654 [2024-10-01 13:48:12.604731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 [2024-10-01 13:48:12.616703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.654 [2024-10-01 13:48:12.616763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.654 [2024-10-01 13:48:12.616774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.654 [2024-10-01 13:48:12.616788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.654 [2024-10-01 13:48:12.616798] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.654 [2024-10-01 13:48:12.616811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.654 [2024-10-01 13:48:12.616820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.654 [2024-10-01 13:48:12.616833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 [2024-10-01 13:48:12.676626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.654 BaseBdev1 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.654 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.654 [ 00:14:02.654 { 00:14:02.654 "name": "BaseBdev1", 00:14:02.654 "aliases": [ 00:14:02.654 "423913c7-f30b-4ef0-ba88-2ae93719ee78" 00:14:02.654 ], 00:14:02.654 "product_name": "Malloc disk", 00:14:02.654 "block_size": 512, 00:14:02.654 "num_blocks": 65536, 00:14:02.654 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:02.654 "assigned_rate_limits": { 00:14:02.654 "rw_ios_per_sec": 0, 00:14:02.654 "rw_mbytes_per_sec": 0, 00:14:02.655 "r_mbytes_per_sec": 0, 00:14:02.655 "w_mbytes_per_sec": 0 00:14:02.655 }, 00:14:02.655 "claimed": true, 00:14:02.655 "claim_type": "exclusive_write", 00:14:02.655 "zoned": false, 00:14:02.655 "supported_io_types": { 00:14:02.655 "read": true, 00:14:02.655 "write": true, 00:14:02.655 "unmap": true, 00:14:02.655 "flush": true, 00:14:02.655 "reset": true, 00:14:02.655 "nvme_admin": false, 00:14:02.655 "nvme_io": false, 00:14:02.655 "nvme_io_md": false, 00:14:02.655 "write_zeroes": true, 00:14:02.655 "zcopy": true, 00:14:02.655 "get_zone_info": false, 00:14:02.655 "zone_management": false, 00:14:02.655 "zone_append": false, 00:14:02.655 "compare": false, 00:14:02.655 "compare_and_write": false, 00:14:02.655 "abort": true, 00:14:02.655 "seek_hole": false, 00:14:02.655 "seek_data": false, 00:14:02.655 "copy": true, 00:14:02.655 "nvme_iov_md": false 00:14:02.655 }, 00:14:02.655 "memory_domains": [ 00:14:02.655 { 00:14:02.655 "dma_device_id": "system", 00:14:02.655 "dma_device_type": 1 00:14:02.655 }, 00:14:02.655 { 00:14:02.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.655 "dma_device_type": 2 00:14:02.655 } 00:14:02.655 ], 00:14:02.655 "driver_specific": {} 00:14:02.655 } 00:14:02.655 ] 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.655 "name": "Existed_Raid", 00:14:02.655 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:02.655 "strip_size_kb": 0, 00:14:02.655 "state": "configuring", 00:14:02.655 "raid_level": "raid1", 00:14:02.655 "superblock": false, 00:14:02.655 "num_base_bdevs": 4, 00:14:02.655 "num_base_bdevs_discovered": 1, 00:14:02.655 "num_base_bdevs_operational": 4, 00:14:02.655 "base_bdevs_list": [ 00:14:02.655 { 00:14:02.655 "name": "BaseBdev1", 00:14:02.655 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:02.655 "is_configured": true, 00:14:02.655 "data_offset": 0, 00:14:02.655 "data_size": 65536 00:14:02.655 }, 00:14:02.655 { 00:14:02.655 "name": "BaseBdev2", 00:14:02.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.655 "is_configured": false, 00:14:02.655 "data_offset": 0, 00:14:02.655 "data_size": 0 00:14:02.655 }, 00:14:02.655 { 00:14:02.655 "name": "BaseBdev3", 00:14:02.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.655 "is_configured": false, 00:14:02.655 "data_offset": 0, 00:14:02.655 "data_size": 0 00:14:02.655 }, 00:14:02.655 { 00:14:02.655 "name": "BaseBdev4", 00:14:02.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.655 "is_configured": false, 00:14:02.655 "data_offset": 0, 00:14:02.655 "data_size": 0 00:14:02.655 } 00:14:02.655 ] 00:14:02.655 }' 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.655 13:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.220 [2024-10-01 13:48:13.120067] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.220 [2024-10-01 13:48:13.121547] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.220 [2024-10-01 13:48:13.132087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.220 [2024-10-01 13:48:13.134334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.220 [2024-10-01 13:48:13.134385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.220 [2024-10-01 13:48:13.134411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.220 [2024-10-01 13:48:13.134427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.220 [2024-10-01 13:48:13.134436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:03.220 [2024-10-01 13:48:13.134448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:03.220 13:48:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.220 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.221 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.221 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.221 "name": "Existed_Raid", 00:14:03.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.221 "strip_size_kb": 0, 00:14:03.221 "state": "configuring", 00:14:03.221 "raid_level": "raid1", 00:14:03.221 "superblock": false, 00:14:03.221 "num_base_bdevs": 4, 00:14:03.221 "num_base_bdevs_discovered": 1, 00:14:03.221 
"num_base_bdevs_operational": 4, 00:14:03.221 "base_bdevs_list": [ 00:14:03.221 { 00:14:03.221 "name": "BaseBdev1", 00:14:03.221 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:03.221 "is_configured": true, 00:14:03.221 "data_offset": 0, 00:14:03.221 "data_size": 65536 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev2", 00:14:03.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.221 "is_configured": false, 00:14:03.221 "data_offset": 0, 00:14:03.221 "data_size": 0 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev3", 00:14:03.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.221 "is_configured": false, 00:14:03.221 "data_offset": 0, 00:14:03.221 "data_size": 0 00:14:03.221 }, 00:14:03.221 { 00:14:03.221 "name": "BaseBdev4", 00:14:03.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.221 "is_configured": false, 00:14:03.221 "data_offset": 0, 00:14:03.221 "data_size": 0 00:14:03.221 } 00:14:03.221 ] 00:14:03.221 }' 00:14:03.221 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.221 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.479 [2024-10-01 13:48:13.625143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.479 BaseBdev2 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.479 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.479 [ 00:14:03.479 { 00:14:03.479 "name": "BaseBdev2", 00:14:03.479 "aliases": [ 00:14:03.479 "d9ba0751-9432-4e12-bd61-71b081f64915" 00:14:03.479 ], 00:14:03.479 "product_name": "Malloc disk", 00:14:03.479 "block_size": 512, 00:14:03.479 "num_blocks": 65536, 00:14:03.479 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:03.479 "assigned_rate_limits": { 00:14:03.479 "rw_ios_per_sec": 0, 00:14:03.479 "rw_mbytes_per_sec": 0, 00:14:03.479 "r_mbytes_per_sec": 0, 00:14:03.479 "w_mbytes_per_sec": 0 00:14:03.479 }, 00:14:03.479 "claimed": true, 00:14:03.479 "claim_type": "exclusive_write", 00:14:03.479 "zoned": false, 00:14:03.479 "supported_io_types": { 00:14:03.479 "read": true, 00:14:03.479 "write": true, 00:14:03.479 
"unmap": true, 00:14:03.479 "flush": true, 00:14:03.479 "reset": true, 00:14:03.479 "nvme_admin": false, 00:14:03.479 "nvme_io": false, 00:14:03.479 "nvme_io_md": false, 00:14:03.479 "write_zeroes": true, 00:14:03.479 "zcopy": true, 00:14:03.479 "get_zone_info": false, 00:14:03.479 "zone_management": false, 00:14:03.479 "zone_append": false, 00:14:03.479 "compare": false, 00:14:03.479 "compare_and_write": false, 00:14:03.479 "abort": true, 00:14:03.479 "seek_hole": false, 00:14:03.479 "seek_data": false, 00:14:03.479 "copy": true, 00:14:03.479 "nvme_iov_md": false 00:14:03.479 }, 00:14:03.738 "memory_domains": [ 00:14:03.738 { 00:14:03.738 "dma_device_id": "system", 00:14:03.738 "dma_device_type": 1 00:14:03.738 }, 00:14:03.738 { 00:14:03.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.738 "dma_device_type": 2 00:14:03.738 } 00:14:03.738 ], 00:14:03.738 "driver_specific": {} 00:14:03.738 } 00:14:03.738 ] 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.738 13:48:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.738 "name": "Existed_Raid", 00:14:03.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.738 "strip_size_kb": 0, 00:14:03.738 "state": "configuring", 00:14:03.738 "raid_level": "raid1", 00:14:03.738 "superblock": false, 00:14:03.738 "num_base_bdevs": 4, 00:14:03.738 "num_base_bdevs_discovered": 2, 00:14:03.738 "num_base_bdevs_operational": 4, 00:14:03.738 "base_bdevs_list": [ 00:14:03.738 { 00:14:03.738 "name": "BaseBdev1", 00:14:03.738 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:03.738 "is_configured": true, 00:14:03.738 "data_offset": 0, 00:14:03.738 "data_size": 65536 00:14:03.738 }, 00:14:03.738 { 00:14:03.738 "name": "BaseBdev2", 00:14:03.738 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:03.738 "is_configured": true, 00:14:03.738 
"data_offset": 0, 00:14:03.738 "data_size": 65536 00:14:03.738 }, 00:14:03.738 { 00:14:03.738 "name": "BaseBdev3", 00:14:03.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.738 "is_configured": false, 00:14:03.738 "data_offset": 0, 00:14:03.738 "data_size": 0 00:14:03.738 }, 00:14:03.738 { 00:14:03.738 "name": "BaseBdev4", 00:14:03.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.738 "is_configured": false, 00:14:03.738 "data_offset": 0, 00:14:03.738 "data_size": 0 00:14:03.738 } 00:14:03.738 ] 00:14:03.738 }' 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.738 13:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.997 [2024-10-01 13:48:14.148073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.997 BaseBdev3 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.997 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.997 [ 00:14:03.997 { 00:14:03.997 "name": "BaseBdev3", 00:14:03.997 "aliases": [ 00:14:03.997 "0fcdbe21-48de-4882-9180-cd552cfdd06f" 00:14:03.997 ], 00:14:03.997 "product_name": "Malloc disk", 00:14:03.997 "block_size": 512, 00:14:03.997 "num_blocks": 65536, 00:14:03.997 "uuid": "0fcdbe21-48de-4882-9180-cd552cfdd06f", 00:14:03.997 "assigned_rate_limits": { 00:14:03.997 "rw_ios_per_sec": 0, 00:14:03.997 "rw_mbytes_per_sec": 0, 00:14:03.997 "r_mbytes_per_sec": 0, 00:14:03.997 "w_mbytes_per_sec": 0 00:14:03.997 }, 00:14:03.997 "claimed": true, 00:14:03.997 "claim_type": "exclusive_write", 00:14:03.997 "zoned": false, 00:14:03.997 "supported_io_types": { 00:14:03.997 "read": true, 00:14:03.997 "write": true, 00:14:03.997 "unmap": true, 00:14:03.997 "flush": true, 00:14:03.997 "reset": true, 00:14:04.256 "nvme_admin": false, 00:14:04.256 "nvme_io": false, 00:14:04.256 "nvme_io_md": false, 00:14:04.256 "write_zeroes": true, 00:14:04.256 "zcopy": true, 00:14:04.256 "get_zone_info": false, 00:14:04.256 "zone_management": false, 00:14:04.256 "zone_append": false, 00:14:04.256 "compare": false, 00:14:04.256 "compare_and_write": false, 00:14:04.256 "abort": true, 
00:14:04.256 "seek_hole": false, 00:14:04.256 "seek_data": false, 00:14:04.256 "copy": true, 00:14:04.256 "nvme_iov_md": false 00:14:04.256 }, 00:14:04.256 "memory_domains": [ 00:14:04.256 { 00:14:04.256 "dma_device_id": "system", 00:14:04.256 "dma_device_type": 1 00:14:04.256 }, 00:14:04.256 { 00:14:04.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.256 "dma_device_type": 2 00:14:04.256 } 00:14:04.256 ], 00:14:04.256 "driver_specific": {} 00:14:04.256 } 00:14:04.256 ] 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.256 13:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.256 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.256 "name": "Existed_Raid", 00:14:04.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.257 "strip_size_kb": 0, 00:14:04.257 "state": "configuring", 00:14:04.257 "raid_level": "raid1", 00:14:04.257 "superblock": false, 00:14:04.257 "num_base_bdevs": 4, 00:14:04.257 "num_base_bdevs_discovered": 3, 00:14:04.257 "num_base_bdevs_operational": 4, 00:14:04.257 "base_bdevs_list": [ 00:14:04.257 { 00:14:04.257 "name": "BaseBdev1", 00:14:04.257 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:04.257 "is_configured": true, 00:14:04.257 "data_offset": 0, 00:14:04.257 "data_size": 65536 00:14:04.257 }, 00:14:04.257 { 00:14:04.257 "name": "BaseBdev2", 00:14:04.257 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:04.257 "is_configured": true, 00:14:04.257 "data_offset": 0, 00:14:04.257 "data_size": 65536 00:14:04.257 }, 00:14:04.257 { 00:14:04.257 "name": "BaseBdev3", 00:14:04.257 "uuid": "0fcdbe21-48de-4882-9180-cd552cfdd06f", 00:14:04.257 "is_configured": true, 00:14:04.257 "data_offset": 0, 00:14:04.257 "data_size": 65536 00:14:04.257 }, 00:14:04.257 { 00:14:04.257 "name": "BaseBdev4", 00:14:04.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.257 "is_configured": false, 00:14:04.257 "data_offset": 
0, 00:14:04.257 "data_size": 0 00:14:04.257 } 00:14:04.257 ] 00:14:04.257 }' 00:14:04.257 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.257 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.515 [2024-10-01 13:48:14.666587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.515 [2024-10-01 13:48:14.666646] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:04.515 [2024-10-01 13:48:14.666656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:04.515 [2024-10-01 13:48:14.666964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:04.515 [2024-10-01 13:48:14.667165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:04.515 [2024-10-01 13:48:14.667186] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:04.515 [2024-10-01 13:48:14.667544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.515 BaseBdev4 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.515 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.515 [ 00:14:04.515 { 00:14:04.515 "name": "BaseBdev4", 00:14:04.515 "aliases": [ 00:14:04.515 "3de0d90e-f07f-466c-b23d-8bc38deb1bb2" 00:14:04.515 ], 00:14:04.515 "product_name": "Malloc disk", 00:14:04.515 "block_size": 512, 00:14:04.515 "num_blocks": 65536, 00:14:04.515 "uuid": "3de0d90e-f07f-466c-b23d-8bc38deb1bb2", 00:14:04.515 "assigned_rate_limits": { 00:14:04.515 "rw_ios_per_sec": 0, 00:14:04.515 "rw_mbytes_per_sec": 0, 00:14:04.515 "r_mbytes_per_sec": 0, 00:14:04.515 "w_mbytes_per_sec": 0 00:14:04.515 }, 00:14:04.515 "claimed": true, 00:14:04.515 "claim_type": "exclusive_write", 00:14:04.515 "zoned": false, 00:14:04.515 "supported_io_types": { 00:14:04.515 "read": true, 00:14:04.515 "write": true, 00:14:04.515 "unmap": true, 00:14:04.515 "flush": true, 00:14:04.515 "reset": true, 00:14:04.773 "nvme_admin": false, 00:14:04.773 "nvme_io": 
false, 00:14:04.773 "nvme_io_md": false, 00:14:04.773 "write_zeroes": true, 00:14:04.773 "zcopy": true, 00:14:04.773 "get_zone_info": false, 00:14:04.773 "zone_management": false, 00:14:04.773 "zone_append": false, 00:14:04.773 "compare": false, 00:14:04.773 "compare_and_write": false, 00:14:04.773 "abort": true, 00:14:04.773 "seek_hole": false, 00:14:04.773 "seek_data": false, 00:14:04.773 "copy": true, 00:14:04.773 "nvme_iov_md": false 00:14:04.773 }, 00:14:04.773 "memory_domains": [ 00:14:04.773 { 00:14:04.773 "dma_device_id": "system", 00:14:04.773 "dma_device_type": 1 00:14:04.773 }, 00:14:04.773 { 00:14:04.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.773 "dma_device_type": 2 00:14:04.773 } 00:14:04.773 ], 00:14:04.773 "driver_specific": {} 00:14:04.773 } 00:14:04.773 ] 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.773 "name": "Existed_Raid", 00:14:04.773 "uuid": "a75cbcb8-7162-439b-a067-eab767c5a6ed", 00:14:04.773 "strip_size_kb": 0, 00:14:04.773 "state": "online", 00:14:04.773 "raid_level": "raid1", 00:14:04.773 "superblock": false, 00:14:04.773 "num_base_bdevs": 4, 00:14:04.773 "num_base_bdevs_discovered": 4, 00:14:04.773 "num_base_bdevs_operational": 4, 00:14:04.773 "base_bdevs_list": [ 00:14:04.773 { 00:14:04.773 "name": "BaseBdev1", 00:14:04.773 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:04.773 "is_configured": true, 00:14:04.773 "data_offset": 0, 00:14:04.773 "data_size": 65536 00:14:04.773 }, 00:14:04.773 { 00:14:04.773 "name": "BaseBdev2", 00:14:04.773 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:04.773 "is_configured": true, 00:14:04.773 "data_offset": 0, 00:14:04.773 "data_size": 65536 00:14:04.773 }, 00:14:04.773 { 00:14:04.773 "name": "BaseBdev3", 00:14:04.773 "uuid": "0fcdbe21-48de-4882-9180-cd552cfdd06f", 
00:14:04.773 "is_configured": true, 00:14:04.773 "data_offset": 0, 00:14:04.773 "data_size": 65536 00:14:04.773 }, 00:14:04.773 { 00:14:04.773 "name": "BaseBdev4", 00:14:04.773 "uuid": "3de0d90e-f07f-466c-b23d-8bc38deb1bb2", 00:14:04.773 "is_configured": true, 00:14:04.773 "data_offset": 0, 00:14:04.773 "data_size": 65536 00:14:04.773 } 00:14:04.773 ] 00:14:04.773 }' 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.773 13:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.031 [2024-10-01 13:48:15.182322] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.031 13:48:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.031 "name": "Existed_Raid", 00:14:05.031 "aliases": [ 00:14:05.031 "a75cbcb8-7162-439b-a067-eab767c5a6ed" 00:14:05.031 ], 00:14:05.031 "product_name": "Raid Volume", 00:14:05.031 "block_size": 512, 00:14:05.031 "num_blocks": 65536, 00:14:05.031 "uuid": "a75cbcb8-7162-439b-a067-eab767c5a6ed", 00:14:05.031 "assigned_rate_limits": { 00:14:05.031 "rw_ios_per_sec": 0, 00:14:05.031 "rw_mbytes_per_sec": 0, 00:14:05.031 "r_mbytes_per_sec": 0, 00:14:05.031 "w_mbytes_per_sec": 0 00:14:05.031 }, 00:14:05.031 "claimed": false, 00:14:05.031 "zoned": false, 00:14:05.031 "supported_io_types": { 00:14:05.031 "read": true, 00:14:05.031 "write": true, 00:14:05.031 "unmap": false, 00:14:05.031 "flush": false, 00:14:05.031 "reset": true, 00:14:05.031 "nvme_admin": false, 00:14:05.031 "nvme_io": false, 00:14:05.031 "nvme_io_md": false, 00:14:05.031 "write_zeroes": true, 00:14:05.031 "zcopy": false, 00:14:05.031 "get_zone_info": false, 00:14:05.031 "zone_management": false, 00:14:05.031 "zone_append": false, 00:14:05.031 "compare": false, 00:14:05.031 "compare_and_write": false, 00:14:05.031 "abort": false, 00:14:05.031 "seek_hole": false, 00:14:05.031 "seek_data": false, 00:14:05.031 "copy": false, 00:14:05.031 "nvme_iov_md": false 00:14:05.031 }, 00:14:05.031 "memory_domains": [ 00:14:05.031 { 00:14:05.031 "dma_device_id": "system", 00:14:05.031 "dma_device_type": 1 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.031 "dma_device_type": 2 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "system", 00:14:05.031 "dma_device_type": 1 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.031 "dma_device_type": 2 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "system", 00:14:05.031 "dma_device_type": 1 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.031 "dma_device_type": 2 
00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "system", 00:14:05.031 "dma_device_type": 1 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.031 "dma_device_type": 2 00:14:05.031 } 00:14:05.031 ], 00:14:05.031 "driver_specific": { 00:14:05.031 "raid": { 00:14:05.031 "uuid": "a75cbcb8-7162-439b-a067-eab767c5a6ed", 00:14:05.031 "strip_size_kb": 0, 00:14:05.031 "state": "online", 00:14:05.031 "raid_level": "raid1", 00:14:05.031 "superblock": false, 00:14:05.031 "num_base_bdevs": 4, 00:14:05.031 "num_base_bdevs_discovered": 4, 00:14:05.031 "num_base_bdevs_operational": 4, 00:14:05.031 "base_bdevs_list": [ 00:14:05.031 { 00:14:05.031 "name": "BaseBdev1", 00:14:05.031 "uuid": "423913c7-f30b-4ef0-ba88-2ae93719ee78", 00:14:05.031 "is_configured": true, 00:14:05.031 "data_offset": 0, 00:14:05.031 "data_size": 65536 00:14:05.031 }, 00:14:05.031 { 00:14:05.031 "name": "BaseBdev2", 00:14:05.031 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:05.031 "is_configured": true, 00:14:05.031 "data_offset": 0, 00:14:05.031 "data_size": 65536 00:14:05.031 }, 00:14:05.031 { 00:14:05.032 "name": "BaseBdev3", 00:14:05.032 "uuid": "0fcdbe21-48de-4882-9180-cd552cfdd06f", 00:14:05.032 "is_configured": true, 00:14:05.032 "data_offset": 0, 00:14:05.032 "data_size": 65536 00:14:05.032 }, 00:14:05.032 { 00:14:05.032 "name": "BaseBdev4", 00:14:05.032 "uuid": "3de0d90e-f07f-466c-b23d-8bc38deb1bb2", 00:14:05.032 "is_configured": true, 00:14:05.032 "data_offset": 0, 00:14:05.032 "data_size": 65536 00:14:05.032 } 00:14:05.032 ] 00:14:05.032 } 00:14:05.032 } 00:14:05.032 }' 00:14:05.290 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:05.291 BaseBdev2 00:14:05.291 BaseBdev3 00:14:05.291 BaseBdev4' 00:14:05.291 
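The jq filter above (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`) pulls the configured base bdev names out of the `bdev_get_bdevs` dump for Existed_Raid. As an illustrative sketch only (not part of the SPDK test suite), the same extraction in Python over the JSON shape shown in the log, trimmed to the fields the filter actually touches:

```python
import json

# Raid bdev info as dumped by `bdev_get_bdevs -b Existed_Raid` in the log,
# trimmed to the fields the jq filter reads.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # all four base bdevs are configured at this point
```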
13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.291 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.549 [2024-10-01 13:48:15.513609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.549 "name": "Existed_Raid", 00:14:05.549 "uuid": "a75cbcb8-7162-439b-a067-eab767c5a6ed", 00:14:05.549 "strip_size_kb": 0, 00:14:05.549 "state": "online", 00:14:05.549 "raid_level": "raid1", 00:14:05.549 "superblock": false, 00:14:05.549 "num_base_bdevs": 4, 00:14:05.549 "num_base_bdevs_discovered": 3, 00:14:05.549 "num_base_bdevs_operational": 3, 00:14:05.549 "base_bdevs_list": [ 00:14:05.549 { 00:14:05.549 "name": null, 00:14:05.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.549 "is_configured": false, 00:14:05.549 "data_offset": 0, 00:14:05.549 "data_size": 65536 00:14:05.549 }, 00:14:05.549 { 00:14:05.549 "name": "BaseBdev2", 00:14:05.549 "uuid": "d9ba0751-9432-4e12-bd61-71b081f64915", 00:14:05.549 "is_configured": true, 00:14:05.549 "data_offset": 0, 00:14:05.549 "data_size": 65536 00:14:05.549 }, 00:14:05.549 { 00:14:05.549 "name": "BaseBdev3", 00:14:05.549 "uuid": "0fcdbe21-48de-4882-9180-cd552cfdd06f", 00:14:05.549 "is_configured": true, 00:14:05.549 "data_offset": 0, 00:14:05.549 "data_size": 65536 00:14:05.549 }, 00:14:05.549 { 
00:14:05.549 "name": "BaseBdev4", 00:14:05.549 "uuid": "3de0d90e-f07f-466c-b23d-8bc38deb1bb2", 00:14:05.549 "is_configured": true, 00:14:05.549 "data_offset": 0, 00:14:05.549 "data_size": 65536 00:14:05.549 } 00:14:05.549 ] 00:14:05.549 }' 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.549 13:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 [2024-10-01 13:48:16.102350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.115 
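The `(( i++ ))` loop here deletes one base bdev at a time and re-runs `verify_raid_bdev_state`: because `has_redundancy raid1` returns 0, the expected state after removing BaseBdev1 is still `online` with `num_base_bdevs_discovered` dropping from 4 to 3. A minimal model of that expectation, for illustration only (real SPDK state transitions also depend on superblock usage and the specific raid module):

```python
def expected_raid_state(raid_level: str, num_base_bdevs: int, num_removed: int) -> str:
    """Simplified mirror of the test's expectation: redundant levels
    (raid1) keep serving I/O after losing base bdevs, while non-redundant
    levels (raid0, concat) cannot survive the first loss."""
    remaining = num_base_bdevs - num_removed
    if num_removed == 0:
        return "online"
    if raid_level == "raid1":
        return "online" if remaining >= 1 else "offline"
    return "offline"

# After `bdev_malloc_delete BaseBdev1` the log still shows
# "state": "online" with num_base_bdevs_discovered == 3:
print(expected_raid_state("raid1", 4, 1))  # online
```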
13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.115 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 [2024-10-01 13:48:16.264605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.373 13:48:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.373 [2024-10-01 13:48:16.425668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:14:06.373 [2024-10-01 13:48:16.425770] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:06.373 [2024-10-01 13:48:16.532154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:06.373 [2024-10-01 13:48:16.532228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:06.373 [2024-10-01 13:48:16.532245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.373 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 BaseBdev2
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 [
00:14:06.633 {
00:14:06.633 "name": "BaseBdev2",
00:14:06.633 "aliases": [
00:14:06.633 "989545a9-2d1d-41ac-89b0-05954e8ad486"
00:14:06.633 ],
00:14:06.633 "product_name": "Malloc disk",
00:14:06.633 "block_size": 512,
00:14:06.633 "num_blocks": 65536,
00:14:06.633 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486",
00:14:06.633 "assigned_rate_limits": {
00:14:06.633 "rw_ios_per_sec": 0,
00:14:06.633 "rw_mbytes_per_sec": 0,
00:14:06.633 "r_mbytes_per_sec": 0,
00:14:06.633 "w_mbytes_per_sec": 0
00:14:06.633 },
00:14:06.633 "claimed": false,
00:14:06.633 "zoned": false,
00:14:06.633 "supported_io_types": {
00:14:06.633 "read": true,
00:14:06.633 "write": true,
00:14:06.633 "unmap": true,
00:14:06.633 "flush": true,
00:14:06.633 "reset": true,
00:14:06.633 "nvme_admin": false,
00:14:06.633 "nvme_io": false,
00:14:06.633 "nvme_io_md": false,
00:14:06.633 "write_zeroes": true,
00:14:06.633 "zcopy": true,
00:14:06.633 "get_zone_info": false,
00:14:06.633 "zone_management": false,
00:14:06.633 "zone_append": false,
00:14:06.633 "compare": false,
00:14:06.633 "compare_and_write": false,
00:14:06.633 "abort": true,
00:14:06.633 "seek_hole": false,
00:14:06.633 "seek_data": false,
00:14:06.633 "copy": true,
00:14:06.633 "nvme_iov_md": false
00:14:06.633 },
00:14:06.633 "memory_domains": [
00:14:06.633 {
00:14:06.633 "dma_device_id": "system",
00:14:06.633 "dma_device_type": 1
00:14:06.633 },
00:14:06.633 {
00:14:06.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.633 "dma_device_type": 2
00:14:06.633 }
00:14:06.633 ],
00:14:06.633 "driver_specific": {}
00:14:06.633 }
00:14:06.633 ]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 BaseBdev3
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 [
00:14:06.633 {
00:14:06.633 "name": "BaseBdev3",
00:14:06.633 "aliases": [
00:14:06.633 "f59bef17-b6b7-4589-8db5-f5cd62a52f8d"
00:14:06.633 ],
00:14:06.633 "product_name": "Malloc disk",
00:14:06.633 "block_size": 512,
00:14:06.633 "num_blocks": 65536,
00:14:06.633 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d",
00:14:06.633 "assigned_rate_limits": {
00:14:06.633 "rw_ios_per_sec": 0,
00:14:06.633 "rw_mbytes_per_sec": 0,
00:14:06.633 "r_mbytes_per_sec": 0,
00:14:06.633 "w_mbytes_per_sec": 0
00:14:06.633 },
00:14:06.633 "claimed": false,
00:14:06.633 "zoned": false,
00:14:06.633 "supported_io_types": {
00:14:06.633 "read": true,
00:14:06.633 "write": true,
00:14:06.633 "unmap": true,
00:14:06.633 "flush": true,
00:14:06.633 "reset": true,
00:14:06.633 "nvme_admin": false,
00:14:06.633 "nvme_io": false,
00:14:06.633 "nvme_io_md": false,
00:14:06.633 "write_zeroes": true,
00:14:06.633 "zcopy": true,
00:14:06.633 "get_zone_info": false,
00:14:06.633 "zone_management": false,
00:14:06.633 "zone_append": false,
00:14:06.633 "compare": false,
00:14:06.633 "compare_and_write": false,
00:14:06.633 "abort": true,
00:14:06.633 "seek_hole": false,
00:14:06.633 "seek_data": false,
00:14:06.633 "copy": true,
00:14:06.633 "nvme_iov_md": false
00:14:06.633 },
00:14:06.633 "memory_domains": [
00:14:06.633 {
00:14:06.633 "dma_device_id": "system",
00:14:06.633 "dma_device_type": 1
00:14:06.633 },
00:14:06.633 {
00:14:06.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.633 "dma_device_type": 2
00:14:06.633 }
00:14:06.633 ],
00:14:06.633 "driver_specific": {}
00:14:06.633 }
00:14:06.633 ]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.633 BaseBdev4
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.633 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.892 [
00:14:06.892 {
00:14:06.892 "name": "BaseBdev4",
00:14:06.892 "aliases": [
00:14:06.892 "9733502a-4b93-42a5-b284-e8712dae245e"
00:14:06.892 ],
00:14:06.892 "product_name": "Malloc disk",
00:14:06.892 "block_size": 512,
00:14:06.892 "num_blocks": 65536,
00:14:06.892 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e",
00:14:06.892 "assigned_rate_limits": {
00:14:06.892 "rw_ios_per_sec": 0,
00:14:06.892 "rw_mbytes_per_sec": 0,
00:14:06.892 "r_mbytes_per_sec": 0,
00:14:06.892 "w_mbytes_per_sec": 0
00:14:06.892 },
00:14:06.892 "claimed": false,
00:14:06.892 "zoned": false,
00:14:06.892 "supported_io_types": {
00:14:06.892 "read": true,
00:14:06.892 "write": true,
00:14:06.892 "unmap": true,
00:14:06.892 "flush": true,
00:14:06.892 "reset": true,
00:14:06.892 "nvme_admin": false,
00:14:06.892 "nvme_io": false,
00:14:06.892 "nvme_io_md": false,
00:14:06.892 "write_zeroes": true,
00:14:06.892 "zcopy": true,
00:14:06.892 "get_zone_info": false,
00:14:06.892 "zone_management": false,
00:14:06.892 "zone_append": false,
00:14:06.892 "compare": false,
00:14:06.892 "compare_and_write": false,
00:14:06.892 "abort": true,
00:14:06.892 "seek_hole": false,
00:14:06.892 "seek_data": false,
00:14:06.892 "copy": true,
00:14:06.892 "nvme_iov_md": false
00:14:06.892 },
00:14:06.892 "memory_domains": [
00:14:06.892 {
00:14:06.892 "dma_device_id": "system",
00:14:06.892 "dma_device_type": 1
00:14:06.892 },
00:14:06.892 {
00:14:06.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.892 "dma_device_type": 2
00:14:06.892 }
00:14:06.892 ],
00:14:06.892 "driver_specific": {}
00:14:06.892 }
00:14:06.892 ]
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.892 [2024-10-01 13:48:16.872031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:06.892 [2024-10-01 13:48:16.872220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:06.892 [2024-10-01 13:48:16.872324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:06.892 [2024-10-01 13:48:16.874756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:06.892 [2024-10-01 13:48:16.874928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.892 "name": "Existed_Raid",
00:14:06.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.892 "strip_size_kb": 0,
00:14:06.892 "state": "configuring",
00:14:06.892 "raid_level": "raid1",
00:14:06.892 "superblock": false,
00:14:06.892 "num_base_bdevs": 4,
00:14:06.892 "num_base_bdevs_discovered": 3,
00:14:06.892 "num_base_bdevs_operational": 4,
00:14:06.892 "base_bdevs_list": [
00:14:06.892 {
00:14:06.892 "name": "BaseBdev1",
00:14:06.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.892 "is_configured": false,
00:14:06.892 "data_offset": 0,
00:14:06.892 "data_size": 0
00:14:06.892 },
00:14:06.892 {
00:14:06.892 "name": "BaseBdev2",
00:14:06.892 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486",
00:14:06.892 "is_configured": true,
00:14:06.892 "data_offset": 0,
00:14:06.892 "data_size": 65536
00:14:06.892 },
00:14:06.892 {
00:14:06.892 "name": "BaseBdev3",
00:14:06.892 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d",
00:14:06.892 "is_configured": true,
00:14:06.892 "data_offset": 0,
00:14:06.892 "data_size": 65536
00:14:06.892 },
00:14:06.892 {
00:14:06.892 "name": "BaseBdev4",
00:14:06.892 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e",
00:14:06.892 "is_configured": true,
00:14:06.892 "data_offset": 0,
00:14:06.892 "data_size": 65536
00:14:06.892 }
00:14:06.892 ]
00:14:06.892 }'
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.892 13:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.150 [2024-10-01 13:48:17.331672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:07.150 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.151 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.151 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.151 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.151 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.409 "name": "Existed_Raid",
00:14:07.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.409 "strip_size_kb": 0,
00:14:07.409 "state": "configuring",
00:14:07.409 "raid_level": "raid1",
00:14:07.409 "superblock": false,
00:14:07.409 "num_base_bdevs": 4,
00:14:07.409 "num_base_bdevs_discovered": 2,
00:14:07.409 "num_base_bdevs_operational": 4,
00:14:07.409 "base_bdevs_list": [
00:14:07.409 {
00:14:07.409 "name": "BaseBdev1",
00:14:07.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.409 "is_configured": false,
00:14:07.409 "data_offset": 0,
00:14:07.409 "data_size": 0
00:14:07.409 },
00:14:07.409 {
00:14:07.409 "name": null,
00:14:07.409 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486",
00:14:07.409 "is_configured": false,
00:14:07.409 "data_offset": 0,
00:14:07.409 "data_size": 65536
00:14:07.409 },
00:14:07.409 {
00:14:07.409 "name": "BaseBdev3",
00:14:07.409 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d",
00:14:07.409 "is_configured": true,
00:14:07.409 "data_offset": 0,
00:14:07.409 "data_size": 65536
00:14:07.409 },
00:14:07.409 {
00:14:07.409 "name": "BaseBdev4",
00:14:07.409 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e",
00:14:07.409 "is_configured": true,
00:14:07.409 "data_offset": 0,
00:14:07.409 "data_size": 65536
00:14:07.409 }
00:14:07.409 ]
00:14:07.409 }'
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.409 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.668 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.925 [2024-10-01 13:48:17.865010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:07.925 BaseBdev1
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.925 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.926 [
00:14:07.926 {
00:14:07.926 "name": "BaseBdev1",
00:14:07.926 "aliases": [
00:14:07.926 "f637e5a8-469f-41b1-a935-6b333a2c2d3c"
00:14:07.926 ],
00:14:07.926 "product_name": "Malloc disk",
00:14:07.926 "block_size": 512,
00:14:07.926 "num_blocks": 65536,
00:14:07.926 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c",
00:14:07.926 "assigned_rate_limits": {
00:14:07.926 "rw_ios_per_sec": 0,
00:14:07.926 "rw_mbytes_per_sec": 0,
00:14:07.926 "r_mbytes_per_sec": 0,
00:14:07.926 "w_mbytes_per_sec": 0
00:14:07.926 },
00:14:07.926 "claimed": true,
00:14:07.926 "claim_type": "exclusive_write",
00:14:07.926 "zoned": false,
00:14:07.926 "supported_io_types": {
00:14:07.926 "read": true,
00:14:07.926 "write": true,
00:14:07.926 "unmap": true,
00:14:07.926 "flush": true,
00:14:07.926 "reset": true,
00:14:07.926 "nvme_admin": false,
00:14:07.926 "nvme_io": false,
00:14:07.926 "nvme_io_md": false,
00:14:07.926 "write_zeroes": true,
00:14:07.926 "zcopy": true,
00:14:07.926 "get_zone_info": false,
00:14:07.926 "zone_management": false,
00:14:07.926 "zone_append": false,
00:14:07.926 "compare": false,
00:14:07.926 "compare_and_write": false,
00:14:07.926 "abort": true,
00:14:07.926 "seek_hole": false,
00:14:07.926 "seek_data": false,
00:14:07.926 "copy": true,
00:14:07.926 "nvme_iov_md": false
00:14:07.926 },
00:14:07.926 "memory_domains": [
00:14:07.926 {
00:14:07.926 "dma_device_id": "system",
00:14:07.926 "dma_device_type": 1
00:14:07.926 },
00:14:07.926 {
00:14:07.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:07.926 "dma_device_type": 2
00:14:07.926 }
00:14:07.926 ],
00:14:07.926 "driver_specific": {}
00:14:07.926 }
00:14:07.926 ]
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.926 "name": "Existed_Raid",
00:14:07.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.926 "strip_size_kb": 0,
00:14:07.926 "state": "configuring",
00:14:07.926 "raid_level": "raid1",
00:14:07.926 "superblock": false,
00:14:07.926 "num_base_bdevs": 4,
00:14:07.926 "num_base_bdevs_discovered": 3,
00:14:07.926 "num_base_bdevs_operational": 4,
00:14:07.926 "base_bdevs_list": [
00:14:07.926 {
00:14:07.926 "name": "BaseBdev1",
00:14:07.926 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c",
00:14:07.926 "is_configured": true,
00:14:07.926 "data_offset": 0,
00:14:07.926 "data_size": 65536
00:14:07.926 },
00:14:07.926 {
00:14:07.926 "name": null,
00:14:07.926 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486",
00:14:07.926 "is_configured": false,
00:14:07.926 "data_offset": 0,
00:14:07.926 "data_size": 65536
00:14:07.926 },
00:14:07.926 {
00:14:07.926 "name": "BaseBdev3",
00:14:07.926 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d",
00:14:07.926 "is_configured": true,
00:14:07.926 "data_offset": 0,
00:14:07.926 "data_size": 65536
00:14:07.926 },
00:14:07.926 {
00:14:07.926 "name": "BaseBdev4",
00:14:07.926 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e",
00:14:07.926 "is_configured": true,
00:14:07.926 "data_offset": 0,
00:14:07.926 "data_size": 65536
00:14:07.926 }
00:14:07.926 ]
00:14:07.926 }'
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.926 13:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.184 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.184 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.184 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.184 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.443 [2024-10-01 13:48:18.412577] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:08.443 "name": "Existed_Raid",
00:14:08.443 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.443 "strip_size_kb": 0,
00:14:08.443 "state": "configuring",
00:14:08.443 "raid_level": "raid1",
00:14:08.443 "superblock": false,
00:14:08.443 "num_base_bdevs": 4,
00:14:08.443 "num_base_bdevs_discovered": 2,
00:14:08.443 "num_base_bdevs_operational": 4,
00:14:08.443 "base_bdevs_list": [
00:14:08.443 {
00:14:08.443 "name": "BaseBdev1",
00:14:08.443 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c",
00:14:08.443 "is_configured": true,
00:14:08.443 "data_offset": 0,
00:14:08.443 "data_size": 65536
00:14:08.443 },
00:14:08.443 {
00:14:08.443 "name": null,
00:14:08.443 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486",
00:14:08.443 "is_configured": false,
00:14:08.443 "data_offset": 0,
00:14:08.443 "data_size": 65536
00:14:08.443 },
00:14:08.443 {
00:14:08.443 "name": null,
00:14:08.443 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d",
00:14:08.443 "is_configured": false,
00:14:08.443 "data_offset": 0,
00:14:08.443 "data_size": 65536
00:14:08.443 },
00:14:08.443 {
00:14:08.443 "name": "BaseBdev4",
00:14:08.443 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e",
00:14:08.443 "is_configured": true,
00:14:08.443 "data_offset": 0,
00:14:08.443 "data_size": 65536
00:14:08.443 }
00:14:08.443 ]
00:14:08.443 }'
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:08.443 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.701 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:08.701 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.701 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.701 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.959 [2024-10-01 13:48:18.919950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.959 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:08.959 "name": "Existed_Raid",
00:14:08.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.959 "strip_size_kb": 0,
00:14:08.959 "state": "configuring",
00:14:08.959 "raid_level": "raid1",
00:14:08.959 "superblock": false,
00:14:08.959 "num_base_bdevs": 4,
00:14:08.959 "num_base_bdevs_discovered": 3,
00:14:08.959 "num_base_bdevs_operational": 4,
00:14:08.959 "base_bdevs_list": [
00:14:08.959 {
00:14:08.959 "name": "BaseBdev1",
00:14:08.959 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c",
00:14:08.959 "is_configured": true,
00:14:08.959 "data_offset": 0,
00:14:08.960 "data_size": 65536
00:14:08.960 },
00:14:08.960 {
00:14:08.960 "name": "BaseBdev3", 00:14:08.960 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d", 00:14:08.960 "is_configured": true, 00:14:08.960 "data_offset": 0, 00:14:08.960 "data_size": 65536 00:14:08.960 }, 00:14:08.960 { 00:14:08.960 "name": "BaseBdev4", 00:14:08.960 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e", 00:14:08.960 "is_configured": true, 00:14:08.960 "data_offset": 0, 00:14:08.960 "data_size": 65536 00:14:08.960 } 00:14:08.960 ] 00:14:08.960 }' 00:14:08.960 13:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.960 13:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.218 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.218 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.218 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.219 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.219 [2024-10-01 13:48:19.407693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.513 "name": "Existed_Raid", 00:14:09.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.513 "strip_size_kb": 0, 00:14:09.513 "state": "configuring", 00:14:09.513 "raid_level": "raid1", 00:14:09.513 "superblock": false, 00:14:09.513 
"num_base_bdevs": 4, 00:14:09.513 "num_base_bdevs_discovered": 2, 00:14:09.513 "num_base_bdevs_operational": 4, 00:14:09.513 "base_bdevs_list": [ 00:14:09.513 { 00:14:09.513 "name": null, 00:14:09.513 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c", 00:14:09.513 "is_configured": false, 00:14:09.513 "data_offset": 0, 00:14:09.513 "data_size": 65536 00:14:09.513 }, 00:14:09.513 { 00:14:09.513 "name": null, 00:14:09.513 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486", 00:14:09.513 "is_configured": false, 00:14:09.513 "data_offset": 0, 00:14:09.513 "data_size": 65536 00:14:09.513 }, 00:14:09.513 { 00:14:09.513 "name": "BaseBdev3", 00:14:09.513 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d", 00:14:09.513 "is_configured": true, 00:14:09.513 "data_offset": 0, 00:14:09.513 "data_size": 65536 00:14:09.513 }, 00:14:09.513 { 00:14:09.513 "name": "BaseBdev4", 00:14:09.513 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e", 00:14:09.513 "is_configured": true, 00:14:09.513 "data_offset": 0, 00:14:09.513 "data_size": 65536 00:14:09.513 } 00:14:09.513 ] 00:14:09.513 }' 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.513 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.785 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.785 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.785 13:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.785 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 13:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.043 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:10.043 13:48:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:10.043 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.043 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 [2024-10-01 13:48:20.016368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.043 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.044 13:48:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.044 "name": "Existed_Raid", 00:14:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.044 "strip_size_kb": 0, 00:14:10.044 "state": "configuring", 00:14:10.044 "raid_level": "raid1", 00:14:10.044 "superblock": false, 00:14:10.044 "num_base_bdevs": 4, 00:14:10.044 "num_base_bdevs_discovered": 3, 00:14:10.044 "num_base_bdevs_operational": 4, 00:14:10.044 "base_bdevs_list": [ 00:14:10.044 { 00:14:10.044 "name": null, 00:14:10.044 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c", 00:14:10.044 "is_configured": false, 00:14:10.044 "data_offset": 0, 00:14:10.044 "data_size": 65536 00:14:10.044 }, 00:14:10.044 { 00:14:10.044 "name": "BaseBdev2", 00:14:10.044 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486", 00:14:10.044 "is_configured": true, 00:14:10.044 "data_offset": 0, 00:14:10.044 "data_size": 65536 00:14:10.044 }, 00:14:10.044 { 00:14:10.044 "name": "BaseBdev3", 00:14:10.044 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d", 00:14:10.044 "is_configured": true, 00:14:10.044 "data_offset": 0, 00:14:10.044 "data_size": 65536 00:14:10.044 }, 00:14:10.044 { 00:14:10.044 "name": "BaseBdev4", 00:14:10.044 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e", 00:14:10.044 "is_configured": true, 00:14:10.044 "data_offset": 0, 00:14:10.044 "data_size": 65536 00:14:10.044 } 00:14:10.044 ] 00:14:10.044 }' 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.044 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.302 13:48:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.302 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.302 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.302 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.302 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f637e5a8-469f-41b1-a935-6b333a2c2d3c 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.559 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.560 [2024-10-01 13:48:20.588200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:10.560 [2024-10-01 13:48:20.588267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.560 [2024-10-01 13:48:20.588282] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:10.560 
[2024-10-01 13:48:20.588635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:10.560 [2024-10-01 13:48:20.588839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.560 [2024-10-01 13:48:20.588850] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:10.560 [2024-10-01 13:48:20.589142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.560 NewBaseBdev 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.560 [ 00:14:10.560 { 00:14:10.560 "name": "NewBaseBdev", 00:14:10.560 "aliases": [ 00:14:10.560 "f637e5a8-469f-41b1-a935-6b333a2c2d3c" 00:14:10.560 ], 00:14:10.560 "product_name": "Malloc disk", 00:14:10.560 "block_size": 512, 00:14:10.560 "num_blocks": 65536, 00:14:10.560 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c", 00:14:10.560 "assigned_rate_limits": { 00:14:10.560 "rw_ios_per_sec": 0, 00:14:10.560 "rw_mbytes_per_sec": 0, 00:14:10.560 "r_mbytes_per_sec": 0, 00:14:10.560 "w_mbytes_per_sec": 0 00:14:10.560 }, 00:14:10.560 "claimed": true, 00:14:10.560 "claim_type": "exclusive_write", 00:14:10.560 "zoned": false, 00:14:10.560 "supported_io_types": { 00:14:10.560 "read": true, 00:14:10.560 "write": true, 00:14:10.560 "unmap": true, 00:14:10.560 "flush": true, 00:14:10.560 "reset": true, 00:14:10.560 "nvme_admin": false, 00:14:10.560 "nvme_io": false, 00:14:10.560 "nvme_io_md": false, 00:14:10.560 "write_zeroes": true, 00:14:10.560 "zcopy": true, 00:14:10.560 "get_zone_info": false, 00:14:10.560 "zone_management": false, 00:14:10.560 "zone_append": false, 00:14:10.560 "compare": false, 00:14:10.560 "compare_and_write": false, 00:14:10.560 "abort": true, 00:14:10.560 "seek_hole": false, 00:14:10.560 "seek_data": false, 00:14:10.560 "copy": true, 00:14:10.560 "nvme_iov_md": false 00:14:10.560 }, 00:14:10.560 "memory_domains": [ 00:14:10.560 { 00:14:10.560 "dma_device_id": "system", 00:14:10.560 "dma_device_type": 1 00:14:10.560 }, 00:14:10.560 { 00:14:10.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.560 "dma_device_type": 2 00:14:10.560 } 00:14:10.560 ], 00:14:10.560 "driver_specific": {} 00:14:10.560 } 00:14:10.560 ] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.560 "name": "Existed_Raid", 00:14:10.560 "uuid": "ec92f82c-5328-4fb2-b895-f7190523ba93", 00:14:10.560 "strip_size_kb": 0, 00:14:10.560 "state": "online", 00:14:10.560 
"raid_level": "raid1", 00:14:10.560 "superblock": false, 00:14:10.560 "num_base_bdevs": 4, 00:14:10.560 "num_base_bdevs_discovered": 4, 00:14:10.560 "num_base_bdevs_operational": 4, 00:14:10.560 "base_bdevs_list": [ 00:14:10.560 { 00:14:10.560 "name": "NewBaseBdev", 00:14:10.560 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c", 00:14:10.560 "is_configured": true, 00:14:10.560 "data_offset": 0, 00:14:10.560 "data_size": 65536 00:14:10.560 }, 00:14:10.560 { 00:14:10.560 "name": "BaseBdev2", 00:14:10.560 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486", 00:14:10.560 "is_configured": true, 00:14:10.560 "data_offset": 0, 00:14:10.560 "data_size": 65536 00:14:10.560 }, 00:14:10.560 { 00:14:10.560 "name": "BaseBdev3", 00:14:10.560 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d", 00:14:10.560 "is_configured": true, 00:14:10.560 "data_offset": 0, 00:14:10.560 "data_size": 65536 00:14:10.560 }, 00:14:10.560 { 00:14:10.560 "name": "BaseBdev4", 00:14:10.560 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e", 00:14:10.560 "is_configured": true, 00:14:10.560 "data_offset": 0, 00:14:10.560 "data_size": 65536 00:14:10.560 } 00:14:10.560 ] 00:14:10.560 }' 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.560 13:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 [2024-10-01 13:48:21.096006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.127 "name": "Existed_Raid", 00:14:11.127 "aliases": [ 00:14:11.127 "ec92f82c-5328-4fb2-b895-f7190523ba93" 00:14:11.127 ], 00:14:11.127 "product_name": "Raid Volume", 00:14:11.127 "block_size": 512, 00:14:11.127 "num_blocks": 65536, 00:14:11.127 "uuid": "ec92f82c-5328-4fb2-b895-f7190523ba93", 00:14:11.127 "assigned_rate_limits": { 00:14:11.127 "rw_ios_per_sec": 0, 00:14:11.127 "rw_mbytes_per_sec": 0, 00:14:11.127 "r_mbytes_per_sec": 0, 00:14:11.127 "w_mbytes_per_sec": 0 00:14:11.127 }, 00:14:11.127 "claimed": false, 00:14:11.127 "zoned": false, 00:14:11.127 "supported_io_types": { 00:14:11.127 "read": true, 00:14:11.127 "write": true, 00:14:11.127 "unmap": false, 00:14:11.127 "flush": false, 00:14:11.127 "reset": true, 00:14:11.127 "nvme_admin": false, 00:14:11.127 "nvme_io": false, 00:14:11.127 "nvme_io_md": false, 00:14:11.127 "write_zeroes": true, 00:14:11.127 "zcopy": false, 00:14:11.127 "get_zone_info": false, 00:14:11.127 "zone_management": false, 00:14:11.127 "zone_append": false, 00:14:11.127 "compare": false, 00:14:11.127 "compare_and_write": false, 00:14:11.127 "abort": false, 00:14:11.127 "seek_hole": false, 00:14:11.127 "seek_data": false, 00:14:11.127 
"copy": false, 00:14:11.127 "nvme_iov_md": false 00:14:11.127 }, 00:14:11.127 "memory_domains": [ 00:14:11.127 { 00:14:11.127 "dma_device_id": "system", 00:14:11.127 "dma_device_type": 1 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.127 "dma_device_type": 2 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "system", 00:14:11.127 "dma_device_type": 1 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.127 "dma_device_type": 2 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "system", 00:14:11.127 "dma_device_type": 1 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.127 "dma_device_type": 2 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "system", 00:14:11.127 "dma_device_type": 1 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.127 "dma_device_type": 2 00:14:11.127 } 00:14:11.127 ], 00:14:11.127 "driver_specific": { 00:14:11.127 "raid": { 00:14:11.127 "uuid": "ec92f82c-5328-4fb2-b895-f7190523ba93", 00:14:11.127 "strip_size_kb": 0, 00:14:11.127 "state": "online", 00:14:11.127 "raid_level": "raid1", 00:14:11.127 "superblock": false, 00:14:11.127 "num_base_bdevs": 4, 00:14:11.127 "num_base_bdevs_discovered": 4, 00:14:11.127 "num_base_bdevs_operational": 4, 00:14:11.127 "base_bdevs_list": [ 00:14:11.127 { 00:14:11.127 "name": "NewBaseBdev", 00:14:11.127 "uuid": "f637e5a8-469f-41b1-a935-6b333a2c2d3c", 00:14:11.127 "is_configured": true, 00:14:11.127 "data_offset": 0, 00:14:11.127 "data_size": 65536 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "name": "BaseBdev2", 00:14:11.127 "uuid": "989545a9-2d1d-41ac-89b0-05954e8ad486", 00:14:11.127 "is_configured": true, 00:14:11.127 "data_offset": 0, 00:14:11.127 "data_size": 65536 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "name": "BaseBdev3", 00:14:11.127 "uuid": "f59bef17-b6b7-4589-8db5-f5cd62a52f8d", 00:14:11.127 
"is_configured": true, 00:14:11.127 "data_offset": 0, 00:14:11.127 "data_size": 65536 00:14:11.127 }, 00:14:11.127 { 00:14:11.127 "name": "BaseBdev4", 00:14:11.127 "uuid": "9733502a-4b93-42a5-b284-e8712dae245e", 00:14:11.127 "is_configured": true, 00:14:11.127 "data_offset": 0, 00:14:11.127 "data_size": 65536 00:14:11.127 } 00:14:11.127 ] 00:14:11.127 } 00:14:11.127 } 00:14:11.127 }' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:11.127 BaseBdev2 00:14:11.127 BaseBdev3 00:14:11.127 BaseBdev4' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.127 13:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.127 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.128 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.386 13:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.386 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.386 [2024-10-01 13:48:21.423787] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.386 [2024-10-01 13:48:21.423835] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.387 [2024-10-01 13:48:21.424079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.387 [2024-10-01 13:48:21.424555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.387 [2024-10-01 13:48:21.424578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73088 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73088 ']' 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73088 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73088 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.387 killing process with pid 73088 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73088' 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73088 00:14:11.387 [2024-10-01 13:48:21.475964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.387 13:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73088 00:14:12.020 [2024-10-01 13:48:21.916057] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:13.436 ************************************ 00:14:13.436 END TEST raid_state_function_test 00:14:13.436 ************************************ 00:14:13.436 00:14:13.436 real 0m12.109s 00:14:13.436 user 0m18.979s 00:14:13.436 sys 0m2.423s 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:13.436 13:48:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:13.436 13:48:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:13.436 13:48:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.436 13:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.436 ************************************ 00:14:13.436 START TEST raid_state_function_test_sb 00:14:13.436 ************************************ 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.436 
13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:13.436 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73766 00:14:13.437 Process raid pid: 73766 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73766' 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73766 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73766 ']' 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.437 13:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.437 [2024-10-01 13:48:23.523068] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:13.437 [2024-10-01 13:48:23.523219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.694 [2024-10-01 13:48:23.698679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.952 [2024-10-01 13:48:23.962219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.209 [2024-10-01 13:48:24.210039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.209 [2024-10-01 13:48:24.210099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.209 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.209 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:14.209 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.209 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.209 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.467 [2024-10-01 13:48:24.404060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.467 [2024-10-01 13:48:24.404132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.467 [2024-10-01 13:48:24.404149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.468 [2024-10-01 13:48:24.404163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.468 [2024-10-01 13:48:24.404171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:14.468 [2024-10-01 13:48:24.404186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.468 [2024-10-01 13:48:24.404193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:14.468 [2024-10-01 13:48:24.404206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.468 13:48:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.468 "name": "Existed_Raid", 00:14:14.468 "uuid": "3e27c823-40e5-4767-a07f-5cf8e02f4593", 00:14:14.468 "strip_size_kb": 0, 00:14:14.468 "state": "configuring", 00:14:14.468 "raid_level": "raid1", 00:14:14.468 "superblock": true, 00:14:14.468 "num_base_bdevs": 4, 00:14:14.468 "num_base_bdevs_discovered": 0, 00:14:14.468 "num_base_bdevs_operational": 4, 00:14:14.468 "base_bdevs_list": [ 00:14:14.468 { 00:14:14.468 "name": "BaseBdev1", 00:14:14.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.468 "is_configured": false, 00:14:14.468 "data_offset": 0, 00:14:14.468 "data_size": 0 00:14:14.468 }, 00:14:14.468 { 00:14:14.468 "name": "BaseBdev2", 00:14:14.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.468 "is_configured": false, 00:14:14.468 "data_offset": 0, 00:14:14.468 "data_size": 0 00:14:14.468 }, 00:14:14.468 { 00:14:14.468 "name": "BaseBdev3", 00:14:14.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.468 "is_configured": false, 00:14:14.468 "data_offset": 0, 00:14:14.468 "data_size": 0 00:14:14.468 }, 00:14:14.468 { 00:14:14.468 "name": "BaseBdev4", 00:14:14.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.468 "is_configured": false, 00:14:14.468 "data_offset": 0, 00:14:14.468 "data_size": 0 00:14:14.468 } 00:14:14.468 ] 00:14:14.468 }' 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.468 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.725 13:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.725 [2024-10-01 13:48:24.839646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.725 [2024-10-01 13:48:24.839709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.725 [2024-10-01 13:48:24.851641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.725 [2024-10-01 13:48:24.851695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.725 [2024-10-01 13:48:24.851706] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.725 [2024-10-01 13:48:24.851720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.725 [2024-10-01 13:48:24.851728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.725 [2024-10-01 13:48:24.851741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.725 [2024-10-01 13:48:24.851749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:14:14.725 [2024-10-01 13:48:24.851762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.725 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.981 [2024-10-01 13:48:24.925220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.981 BaseBdev1 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.981 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.981 [ 00:14:14.981 { 00:14:14.981 "name": "BaseBdev1", 00:14:14.981 "aliases": [ 00:14:14.981 "3d065142-432e-4aad-9bb5-fd8e5bba972e" 00:14:14.981 ], 00:14:14.981 "product_name": "Malloc disk", 00:14:14.981 "block_size": 512, 00:14:14.981 "num_blocks": 65536, 00:14:14.981 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:14.981 "assigned_rate_limits": { 00:14:14.981 "rw_ios_per_sec": 0, 00:14:14.981 "rw_mbytes_per_sec": 0, 00:14:14.981 "r_mbytes_per_sec": 0, 00:14:14.981 "w_mbytes_per_sec": 0 00:14:14.981 }, 00:14:14.981 "claimed": true, 00:14:14.981 "claim_type": "exclusive_write", 00:14:14.981 "zoned": false, 00:14:14.981 "supported_io_types": { 00:14:14.981 "read": true, 00:14:14.981 "write": true, 00:14:14.981 "unmap": true, 00:14:14.981 "flush": true, 00:14:14.981 "reset": true, 00:14:14.981 "nvme_admin": false, 00:14:14.981 "nvme_io": false, 00:14:14.981 "nvme_io_md": false, 00:14:14.981 "write_zeroes": true, 00:14:14.981 "zcopy": true, 00:14:14.981 "get_zone_info": false, 00:14:14.981 "zone_management": false, 00:14:14.981 "zone_append": false, 00:14:14.981 "compare": false, 00:14:14.981 "compare_and_write": false, 00:14:14.981 "abort": true, 00:14:14.981 "seek_hole": false, 00:14:14.981 "seek_data": false, 00:14:14.981 "copy": true, 00:14:14.981 "nvme_iov_md": false 00:14:14.981 }, 00:14:14.982 "memory_domains": [ 00:14:14.982 { 00:14:14.982 "dma_device_id": "system", 00:14:14.982 "dma_device_type": 1 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.982 "dma_device_type": 2 00:14:14.982 } 00:14:14.982 
], 00:14:14.982 "driver_specific": {} 00:14:14.982 } 00:14:14.982 ] 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.982 13:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.982 13:48:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.982 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.982 "name": "Existed_Raid", 00:14:14.982 "uuid": "2eb47611-32db-4ce6-97ee-472473e7a6ff", 00:14:14.982 "strip_size_kb": 0, 00:14:14.982 "state": "configuring", 00:14:14.982 "raid_level": "raid1", 00:14:14.982 "superblock": true, 00:14:14.982 "num_base_bdevs": 4, 00:14:14.982 "num_base_bdevs_discovered": 1, 00:14:14.982 "num_base_bdevs_operational": 4, 00:14:14.982 "base_bdevs_list": [ 00:14:14.982 { 00:14:14.982 "name": "BaseBdev1", 00:14:14.982 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:14.982 "is_configured": true, 00:14:14.982 "data_offset": 2048, 00:14:14.982 "data_size": 63488 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "name": "BaseBdev2", 00:14:14.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.982 "is_configured": false, 00:14:14.982 "data_offset": 0, 00:14:14.982 "data_size": 0 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "name": "BaseBdev3", 00:14:14.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.982 "is_configured": false, 00:14:14.982 "data_offset": 0, 00:14:14.982 "data_size": 0 00:14:14.982 }, 00:14:14.982 { 00:14:14.982 "name": "BaseBdev4", 00:14:14.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.982 "is_configured": false, 00:14:14.982 "data_offset": 0, 00:14:14.982 "data_size": 0 00:14:14.982 } 00:14:14.982 ] 00:14:14.982 }' 00:14:14.982 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.982 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.239 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.239 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.239 13:48:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.239 [2024-10-01 13:48:25.428596] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.239 [2024-10-01 13:48:25.428678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.497 [2024-10-01 13:48:25.440641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.497 [2024-10-01 13:48:25.443088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.497 [2024-10-01 13:48:25.443143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.497 [2024-10-01 13:48:25.443156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.497 [2024-10-01 13:48:25.443172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.497 [2024-10-01 13:48:25.443180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:15.497 [2024-10-01 13:48:25.443193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:15.497 "name": "Existed_Raid", 00:14:15.497 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:15.497 "strip_size_kb": 0, 00:14:15.497 "state": "configuring", 00:14:15.497 "raid_level": "raid1", 00:14:15.497 "superblock": true, 00:14:15.497 "num_base_bdevs": 4, 00:14:15.497 "num_base_bdevs_discovered": 1, 00:14:15.497 "num_base_bdevs_operational": 4, 00:14:15.497 "base_bdevs_list": [ 00:14:15.497 { 00:14:15.497 "name": "BaseBdev1", 00:14:15.497 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:15.497 "is_configured": true, 00:14:15.497 "data_offset": 2048, 00:14:15.497 "data_size": 63488 00:14:15.497 }, 00:14:15.497 { 00:14:15.497 "name": "BaseBdev2", 00:14:15.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.497 "is_configured": false, 00:14:15.497 "data_offset": 0, 00:14:15.497 "data_size": 0 00:14:15.497 }, 00:14:15.497 { 00:14:15.497 "name": "BaseBdev3", 00:14:15.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.497 "is_configured": false, 00:14:15.497 "data_offset": 0, 00:14:15.497 "data_size": 0 00:14:15.497 }, 00:14:15.497 { 00:14:15.497 "name": "BaseBdev4", 00:14:15.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.497 "is_configured": false, 00:14:15.497 "data_offset": 0, 00:14:15.497 "data_size": 0 00:14:15.497 } 00:14:15.497 ] 00:14:15.497 }' 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.497 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.756 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.757 [2024-10-01 13:48:25.932653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:15.757 BaseBdev2 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.757 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.016 [ 00:14:16.016 { 00:14:16.016 "name": "BaseBdev2", 00:14:16.016 "aliases": [ 00:14:16.016 "f2985b46-2177-4d85-a9ba-f5ed780124d1" 00:14:16.016 ], 00:14:16.016 "product_name": "Malloc disk", 00:14:16.016 "block_size": 512, 00:14:16.016 "num_blocks": 65536, 00:14:16.016 "uuid": "f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:16.016 
"assigned_rate_limits": { 00:14:16.016 "rw_ios_per_sec": 0, 00:14:16.016 "rw_mbytes_per_sec": 0, 00:14:16.016 "r_mbytes_per_sec": 0, 00:14:16.016 "w_mbytes_per_sec": 0 00:14:16.016 }, 00:14:16.016 "claimed": true, 00:14:16.016 "claim_type": "exclusive_write", 00:14:16.016 "zoned": false, 00:14:16.016 "supported_io_types": { 00:14:16.016 "read": true, 00:14:16.016 "write": true, 00:14:16.016 "unmap": true, 00:14:16.016 "flush": true, 00:14:16.016 "reset": true, 00:14:16.016 "nvme_admin": false, 00:14:16.016 "nvme_io": false, 00:14:16.016 "nvme_io_md": false, 00:14:16.016 "write_zeroes": true, 00:14:16.016 "zcopy": true, 00:14:16.016 "get_zone_info": false, 00:14:16.016 "zone_management": false, 00:14:16.016 "zone_append": false, 00:14:16.016 "compare": false, 00:14:16.016 "compare_and_write": false, 00:14:16.016 "abort": true, 00:14:16.016 "seek_hole": false, 00:14:16.016 "seek_data": false, 00:14:16.016 "copy": true, 00:14:16.016 "nvme_iov_md": false 00:14:16.016 }, 00:14:16.016 "memory_domains": [ 00:14:16.016 { 00:14:16.016 "dma_device_id": "system", 00:14:16.016 "dma_device_type": 1 00:14:16.016 }, 00:14:16.016 { 00:14:16.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.016 "dma_device_type": 2 00:14:16.016 } 00:14:16.016 ], 00:14:16.016 "driver_specific": {} 00:14:16.016 } 00:14:16.016 ] 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.016 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.017 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.017 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.017 13:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.017 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.017 13:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.017 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.017 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.017 "name": "Existed_Raid", 00:14:16.017 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:16.017 "strip_size_kb": 0, 00:14:16.017 "state": "configuring", 00:14:16.017 "raid_level": "raid1", 00:14:16.017 "superblock": true, 00:14:16.017 "num_base_bdevs": 4, 00:14:16.017 "num_base_bdevs_discovered": 2, 00:14:16.017 "num_base_bdevs_operational": 4, 
00:14:16.017 "base_bdevs_list": [ 00:14:16.017 { 00:14:16.017 "name": "BaseBdev1", 00:14:16.017 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:16.017 "is_configured": true, 00:14:16.017 "data_offset": 2048, 00:14:16.017 "data_size": 63488 00:14:16.017 }, 00:14:16.017 { 00:14:16.017 "name": "BaseBdev2", 00:14:16.017 "uuid": "f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:16.017 "is_configured": true, 00:14:16.017 "data_offset": 2048, 00:14:16.017 "data_size": 63488 00:14:16.017 }, 00:14:16.017 { 00:14:16.017 "name": "BaseBdev3", 00:14:16.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.017 "is_configured": false, 00:14:16.017 "data_offset": 0, 00:14:16.017 "data_size": 0 00:14:16.017 }, 00:14:16.017 { 00:14:16.017 "name": "BaseBdev4", 00:14:16.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.017 "is_configured": false, 00:14:16.017 "data_offset": 0, 00:14:16.017 "data_size": 0 00:14:16.017 } 00:14:16.017 ] 00:14:16.017 }' 00:14:16.017 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.017 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 [2024-10-01 13:48:26.427664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.276 BaseBdev3 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.276 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 [ 00:14:16.276 { 00:14:16.276 "name": "BaseBdev3", 00:14:16.276 "aliases": [ 00:14:16.276 "4775239a-acd8-45ca-a3c8-d66f0e2c9062" 00:14:16.276 ], 00:14:16.276 "product_name": "Malloc disk", 00:14:16.276 "block_size": 512, 00:14:16.276 "num_blocks": 65536, 00:14:16.276 "uuid": "4775239a-acd8-45ca-a3c8-d66f0e2c9062", 00:14:16.276 "assigned_rate_limits": { 00:14:16.276 "rw_ios_per_sec": 0, 00:14:16.276 "rw_mbytes_per_sec": 0, 00:14:16.276 "r_mbytes_per_sec": 0, 00:14:16.276 "w_mbytes_per_sec": 0 00:14:16.276 }, 00:14:16.276 "claimed": true, 00:14:16.276 "claim_type": "exclusive_write", 00:14:16.276 "zoned": false, 00:14:16.276 "supported_io_types": { 00:14:16.276 "read": true, 00:14:16.276 
"write": true, 00:14:16.276 "unmap": true, 00:14:16.276 "flush": true, 00:14:16.276 "reset": true, 00:14:16.276 "nvme_admin": false, 00:14:16.276 "nvme_io": false, 00:14:16.276 "nvme_io_md": false, 00:14:16.276 "write_zeroes": true, 00:14:16.276 "zcopy": true, 00:14:16.276 "get_zone_info": false, 00:14:16.276 "zone_management": false, 00:14:16.276 "zone_append": false, 00:14:16.276 "compare": false, 00:14:16.276 "compare_and_write": false, 00:14:16.276 "abort": true, 00:14:16.276 "seek_hole": false, 00:14:16.276 "seek_data": false, 00:14:16.276 "copy": true, 00:14:16.276 "nvme_iov_md": false 00:14:16.276 }, 00:14:16.276 "memory_domains": [ 00:14:16.276 { 00:14:16.276 "dma_device_id": "system", 00:14:16.276 "dma_device_type": 1 00:14:16.276 }, 00:14:16.276 { 00:14:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.276 "dma_device_type": 2 00:14:16.276 } 00:14:16.535 ], 00:14:16.535 "driver_specific": {} 00:14:16.535 } 00:14:16.535 ] 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.535 "name": "Existed_Raid", 00:14:16.535 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:16.535 "strip_size_kb": 0, 00:14:16.535 "state": "configuring", 00:14:16.535 "raid_level": "raid1", 00:14:16.535 "superblock": true, 00:14:16.535 "num_base_bdevs": 4, 00:14:16.535 "num_base_bdevs_discovered": 3, 00:14:16.535 "num_base_bdevs_operational": 4, 00:14:16.535 "base_bdevs_list": [ 00:14:16.535 { 00:14:16.535 "name": "BaseBdev1", 00:14:16.535 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:16.535 "is_configured": true, 00:14:16.535 "data_offset": 2048, 00:14:16.535 "data_size": 63488 00:14:16.535 }, 00:14:16.535 { 00:14:16.535 "name": "BaseBdev2", 00:14:16.535 "uuid": 
"f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:16.535 "is_configured": true, 00:14:16.535 "data_offset": 2048, 00:14:16.535 "data_size": 63488 00:14:16.535 }, 00:14:16.535 { 00:14:16.535 "name": "BaseBdev3", 00:14:16.535 "uuid": "4775239a-acd8-45ca-a3c8-d66f0e2c9062", 00:14:16.535 "is_configured": true, 00:14:16.535 "data_offset": 2048, 00:14:16.535 "data_size": 63488 00:14:16.535 }, 00:14:16.535 { 00:14:16.535 "name": "BaseBdev4", 00:14:16.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.535 "is_configured": false, 00:14:16.535 "data_offset": 0, 00:14:16.535 "data_size": 0 00:14:16.535 } 00:14:16.535 ] 00:14:16.535 }' 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.535 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 [2024-10-01 13:48:26.903365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.794 [2024-10-01 13:48:26.903768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:16.794 [2024-10-01 13:48:26.903786] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:16.794 [2024-10-01 13:48:26.904134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.794 [2024-10-01 13:48:26.904320] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:16.794 [2024-10-01 13:48:26.904339] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:14:16.794 [2024-10-01 13:48:26.904526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.794 BaseBdev4 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 [ 00:14:16.794 { 00:14:16.794 "name": "BaseBdev4", 00:14:16.794 "aliases": [ 00:14:16.794 "054eb18a-7ed1-4297-a137-19c17ab05b86" 00:14:16.794 ], 00:14:16.794 "product_name": "Malloc disk", 00:14:16.794 "block_size": 512, 00:14:16.794 
"num_blocks": 65536, 00:14:16.794 "uuid": "054eb18a-7ed1-4297-a137-19c17ab05b86", 00:14:16.794 "assigned_rate_limits": { 00:14:16.794 "rw_ios_per_sec": 0, 00:14:16.794 "rw_mbytes_per_sec": 0, 00:14:16.794 "r_mbytes_per_sec": 0, 00:14:16.794 "w_mbytes_per_sec": 0 00:14:16.794 }, 00:14:16.794 "claimed": true, 00:14:16.794 "claim_type": "exclusive_write", 00:14:16.794 "zoned": false, 00:14:16.794 "supported_io_types": { 00:14:16.794 "read": true, 00:14:16.794 "write": true, 00:14:16.794 "unmap": true, 00:14:16.794 "flush": true, 00:14:16.794 "reset": true, 00:14:16.794 "nvme_admin": false, 00:14:16.794 "nvme_io": false, 00:14:16.794 "nvme_io_md": false, 00:14:16.794 "write_zeroes": true, 00:14:16.794 "zcopy": true, 00:14:16.794 "get_zone_info": false, 00:14:16.794 "zone_management": false, 00:14:16.794 "zone_append": false, 00:14:16.794 "compare": false, 00:14:16.794 "compare_and_write": false, 00:14:16.794 "abort": true, 00:14:16.794 "seek_hole": false, 00:14:16.794 "seek_data": false, 00:14:16.794 "copy": true, 00:14:16.794 "nvme_iov_md": false 00:14:16.794 }, 00:14:16.794 "memory_domains": [ 00:14:16.794 { 00:14:16.794 "dma_device_id": "system", 00:14:16.794 "dma_device_type": 1 00:14:16.794 }, 00:14:16.794 { 00:14:16.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.794 "dma_device_type": 2 00:14:16.794 } 00:14:16.794 ], 00:14:16.794 "driver_specific": {} 00:14:16.794 } 00:14:16.794 ] 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.052 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.052 "name": "Existed_Raid", 00:14:17.052 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:17.052 "strip_size_kb": 0, 00:14:17.052 "state": "online", 00:14:17.052 "raid_level": "raid1", 00:14:17.052 "superblock": true, 00:14:17.052 "num_base_bdevs": 4, 
00:14:17.052 "num_base_bdevs_discovered": 4, 00:14:17.052 "num_base_bdevs_operational": 4, 00:14:17.052 "base_bdevs_list": [ 00:14:17.052 { 00:14:17.053 "name": "BaseBdev1", 00:14:17.053 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:17.053 "is_configured": true, 00:14:17.053 "data_offset": 2048, 00:14:17.053 "data_size": 63488 00:14:17.053 }, 00:14:17.053 { 00:14:17.053 "name": "BaseBdev2", 00:14:17.053 "uuid": "f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:17.053 "is_configured": true, 00:14:17.053 "data_offset": 2048, 00:14:17.053 "data_size": 63488 00:14:17.053 }, 00:14:17.053 { 00:14:17.053 "name": "BaseBdev3", 00:14:17.053 "uuid": "4775239a-acd8-45ca-a3c8-d66f0e2c9062", 00:14:17.053 "is_configured": true, 00:14:17.053 "data_offset": 2048, 00:14:17.053 "data_size": 63488 00:14:17.053 }, 00:14:17.053 { 00:14:17.053 "name": "BaseBdev4", 00:14:17.053 "uuid": "054eb18a-7ed1-4297-a137-19c17ab05b86", 00:14:17.053 "is_configured": true, 00:14:17.053 "data_offset": 2048, 00:14:17.053 "data_size": 63488 00:14:17.053 } 00:14:17.053 ] 00:14:17.053 }' 00:14:17.053 13:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.053 13:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:17.311 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:17.312 
13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.312 [2024-10-01 13:48:27.395108] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:17.312 "name": "Existed_Raid", 00:14:17.312 "aliases": [ 00:14:17.312 "da032d56-5aa3-4d24-a055-011838a2b6e0" 00:14:17.312 ], 00:14:17.312 "product_name": "Raid Volume", 00:14:17.312 "block_size": 512, 00:14:17.312 "num_blocks": 63488, 00:14:17.312 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:17.312 "assigned_rate_limits": { 00:14:17.312 "rw_ios_per_sec": 0, 00:14:17.312 "rw_mbytes_per_sec": 0, 00:14:17.312 "r_mbytes_per_sec": 0, 00:14:17.312 "w_mbytes_per_sec": 0 00:14:17.312 }, 00:14:17.312 "claimed": false, 00:14:17.312 "zoned": false, 00:14:17.312 "supported_io_types": { 00:14:17.312 "read": true, 00:14:17.312 "write": true, 00:14:17.312 "unmap": false, 00:14:17.312 "flush": false, 00:14:17.312 "reset": true, 00:14:17.312 "nvme_admin": false, 00:14:17.312 "nvme_io": false, 00:14:17.312 "nvme_io_md": false, 00:14:17.312 "write_zeroes": true, 00:14:17.312 "zcopy": false, 00:14:17.312 "get_zone_info": false, 00:14:17.312 "zone_management": false, 00:14:17.312 "zone_append": false, 00:14:17.312 "compare": false, 00:14:17.312 "compare_and_write": false, 00:14:17.312 "abort": false, 00:14:17.312 "seek_hole": false, 00:14:17.312 "seek_data": false, 00:14:17.312 "copy": false, 00:14:17.312 
"nvme_iov_md": false 00:14:17.312 }, 00:14:17.312 "memory_domains": [ 00:14:17.312 { 00:14:17.312 "dma_device_id": "system", 00:14:17.312 "dma_device_type": 1 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.312 "dma_device_type": 2 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "system", 00:14:17.312 "dma_device_type": 1 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.312 "dma_device_type": 2 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "system", 00:14:17.312 "dma_device_type": 1 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.312 "dma_device_type": 2 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "system", 00:14:17.312 "dma_device_type": 1 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.312 "dma_device_type": 2 00:14:17.312 } 00:14:17.312 ], 00:14:17.312 "driver_specific": { 00:14:17.312 "raid": { 00:14:17.312 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:17.312 "strip_size_kb": 0, 00:14:17.312 "state": "online", 00:14:17.312 "raid_level": "raid1", 00:14:17.312 "superblock": true, 00:14:17.312 "num_base_bdevs": 4, 00:14:17.312 "num_base_bdevs_discovered": 4, 00:14:17.312 "num_base_bdevs_operational": 4, 00:14:17.312 "base_bdevs_list": [ 00:14:17.312 { 00:14:17.312 "name": "BaseBdev1", 00:14:17.312 "uuid": "3d065142-432e-4aad-9bb5-fd8e5bba972e", 00:14:17.312 "is_configured": true, 00:14:17.312 "data_offset": 2048, 00:14:17.312 "data_size": 63488 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "name": "BaseBdev2", 00:14:17.312 "uuid": "f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:17.312 "is_configured": true, 00:14:17.312 "data_offset": 2048, 00:14:17.312 "data_size": 63488 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "name": "BaseBdev3", 00:14:17.312 "uuid": "4775239a-acd8-45ca-a3c8-d66f0e2c9062", 00:14:17.312 "is_configured": true, 
00:14:17.312 "data_offset": 2048, 00:14:17.312 "data_size": 63488 00:14:17.312 }, 00:14:17.312 { 00:14:17.312 "name": "BaseBdev4", 00:14:17.312 "uuid": "054eb18a-7ed1-4297-a137-19c17ab05b86", 00:14:17.312 "is_configured": true, 00:14:17.312 "data_offset": 2048, 00:14:17.312 "data_size": 63488 00:14:17.312 } 00:14:17.312 ] 00:14:17.312 } 00:14:17.312 } 00:14:17.312 }' 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:17.312 BaseBdev2 00:14:17.312 BaseBdev3 00:14:17.312 BaseBdev4' 00:14:17.312 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.573 13:48:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.573 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.574 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.574 [2024-10-01 13:48:27.722634] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:17.833 13:48:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.833 "name": "Existed_Raid", 00:14:17.833 "uuid": "da032d56-5aa3-4d24-a055-011838a2b6e0", 00:14:17.833 "strip_size_kb": 0, 00:14:17.833 
"state": "online", 00:14:17.833 "raid_level": "raid1", 00:14:17.833 "superblock": true, 00:14:17.833 "num_base_bdevs": 4, 00:14:17.833 "num_base_bdevs_discovered": 3, 00:14:17.833 "num_base_bdevs_operational": 3, 00:14:17.833 "base_bdevs_list": [ 00:14:17.833 { 00:14:17.833 "name": null, 00:14:17.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.833 "is_configured": false, 00:14:17.833 "data_offset": 0, 00:14:17.833 "data_size": 63488 00:14:17.833 }, 00:14:17.833 { 00:14:17.833 "name": "BaseBdev2", 00:14:17.833 "uuid": "f2985b46-2177-4d85-a9ba-f5ed780124d1", 00:14:17.833 "is_configured": true, 00:14:17.833 "data_offset": 2048, 00:14:17.833 "data_size": 63488 00:14:17.833 }, 00:14:17.833 { 00:14:17.833 "name": "BaseBdev3", 00:14:17.833 "uuid": "4775239a-acd8-45ca-a3c8-d66f0e2c9062", 00:14:17.833 "is_configured": true, 00:14:17.833 "data_offset": 2048, 00:14:17.833 "data_size": 63488 00:14:17.833 }, 00:14:17.833 { 00:14:17.833 "name": "BaseBdev4", 00:14:17.833 "uuid": "054eb18a-7ed1-4297-a137-19c17ab05b86", 00:14:17.833 "is_configured": true, 00:14:17.833 "data_offset": 2048, 00:14:17.833 "data_size": 63488 00:14:17.833 } 00:14:17.833 ] 00:14:17.833 }' 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.833 13:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.092 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.092 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.092 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.092 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.092 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.092 13:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.350 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.350 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.350 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.351 [2024-10-01 13:48:28.304897] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.351 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.351 [2024-10-01 13:48:28.455570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.609 [2024-10-01 13:48:28.625666] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:18.609 [2024-10-01 13:48:28.625805] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.609 [2024-10-01 13:48:28.733232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.609 [2024-10-01 13:48:28.733316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.609 [2024-10-01 13:48:28.733336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.609 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 BaseBdev2 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:18.869 [ 00:14:18.869 { 00:14:18.869 "name": "BaseBdev2", 00:14:18.869 "aliases": [ 00:14:18.869 "5ad49453-1f2e-488f-a556-bf419f078f16" 00:14:18.869 ], 00:14:18.869 "product_name": "Malloc disk", 00:14:18.869 "block_size": 512, 00:14:18.869 "num_blocks": 65536, 00:14:18.869 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:18.869 "assigned_rate_limits": { 00:14:18.869 "rw_ios_per_sec": 0, 00:14:18.869 "rw_mbytes_per_sec": 0, 00:14:18.869 "r_mbytes_per_sec": 0, 00:14:18.869 "w_mbytes_per_sec": 0 00:14:18.869 }, 00:14:18.869 "claimed": false, 00:14:18.869 "zoned": false, 00:14:18.869 "supported_io_types": { 00:14:18.869 "read": true, 00:14:18.869 "write": true, 00:14:18.869 "unmap": true, 00:14:18.869 "flush": true, 00:14:18.869 "reset": true, 00:14:18.869 "nvme_admin": false, 00:14:18.869 "nvme_io": false, 00:14:18.869 "nvme_io_md": false, 00:14:18.869 "write_zeroes": true, 00:14:18.869 "zcopy": true, 00:14:18.869 "get_zone_info": false, 00:14:18.869 "zone_management": false, 00:14:18.869 "zone_append": false, 00:14:18.869 "compare": false, 00:14:18.869 "compare_and_write": false, 00:14:18.869 "abort": true, 00:14:18.869 "seek_hole": false, 00:14:18.869 "seek_data": false, 00:14:18.869 "copy": true, 00:14:18.869 "nvme_iov_md": false 00:14:18.869 }, 00:14:18.869 "memory_domains": [ 00:14:18.869 { 00:14:18.869 "dma_device_id": "system", 00:14:18.869 "dma_device_type": 1 00:14:18.869 }, 00:14:18.869 { 00:14:18.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.869 "dma_device_type": 2 00:14:18.869 } 00:14:18.869 ], 00:14:18.869 "driver_specific": {} 00:14:18.869 } 00:14:18.869 ] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.869 13:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 BaseBdev3 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 [ 00:14:18.869 { 00:14:18.869 "name": "BaseBdev3", 00:14:18.869 "aliases": [ 00:14:18.869 "e352434d-433f-48a4-8ddb-5671de5c7094" 00:14:18.869 ], 00:14:18.869 "product_name": "Malloc disk", 00:14:18.869 "block_size": 512, 00:14:18.869 "num_blocks": 65536, 00:14:18.869 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:18.869 "assigned_rate_limits": { 00:14:18.869 "rw_ios_per_sec": 0, 00:14:18.869 "rw_mbytes_per_sec": 0, 00:14:18.869 "r_mbytes_per_sec": 0, 00:14:18.869 "w_mbytes_per_sec": 0 00:14:18.869 }, 00:14:18.869 "claimed": false, 00:14:18.869 "zoned": false, 00:14:18.869 "supported_io_types": { 00:14:18.869 "read": true, 00:14:18.869 "write": true, 00:14:18.869 "unmap": true, 00:14:18.869 "flush": true, 00:14:18.869 "reset": true, 00:14:18.869 "nvme_admin": false, 00:14:18.869 "nvme_io": false, 00:14:18.869 "nvme_io_md": false, 00:14:18.869 "write_zeroes": true, 00:14:18.869 "zcopy": true, 00:14:18.869 "get_zone_info": false, 00:14:18.869 "zone_management": false, 00:14:18.869 "zone_append": false, 00:14:18.869 "compare": false, 00:14:18.869 "compare_and_write": false, 00:14:18.869 "abort": true, 00:14:18.869 "seek_hole": false, 00:14:18.869 "seek_data": false, 00:14:18.869 "copy": true, 00:14:18.869 "nvme_iov_md": false 00:14:18.869 }, 00:14:18.869 "memory_domains": [ 00:14:18.869 { 00:14:18.869 "dma_device_id": "system", 00:14:18.869 "dma_device_type": 1 00:14:18.869 }, 00:14:18.869 { 00:14:18.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.869 "dma_device_type": 2 00:14:18.869 } 00:14:18.869 ], 00:14:18.869 "driver_specific": {} 00:14:18.869 } 00:14:18.869 ] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.869 13:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.869 BaseBdev4 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.869 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 [ 00:14:18.870 { 00:14:18.870 "name": "BaseBdev4", 00:14:18.870 "aliases": [ 00:14:18.870 "e1f22e95-699e-45d2-88e7-8f80aec150f0" 00:14:18.870 ], 00:14:18.870 "product_name": "Malloc disk", 00:14:18.870 "block_size": 512, 00:14:18.870 "num_blocks": 65536, 00:14:18.870 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:18.870 "assigned_rate_limits": { 00:14:18.870 "rw_ios_per_sec": 0, 00:14:18.870 "rw_mbytes_per_sec": 0, 00:14:18.870 "r_mbytes_per_sec": 0, 00:14:18.870 "w_mbytes_per_sec": 0 00:14:18.870 }, 00:14:18.870 "claimed": false, 00:14:18.870 "zoned": false, 00:14:18.870 "supported_io_types": { 00:14:18.870 "read": true, 00:14:18.870 "write": true, 00:14:18.870 "unmap": true, 00:14:18.870 "flush": true, 00:14:18.870 "reset": true, 00:14:18.870 "nvme_admin": false, 00:14:18.870 "nvme_io": false, 00:14:18.870 "nvme_io_md": false, 00:14:18.870 "write_zeroes": true, 00:14:18.870 "zcopy": true, 00:14:18.870 "get_zone_info": false, 00:14:18.870 "zone_management": false, 00:14:18.870 "zone_append": false, 00:14:18.870 "compare": false, 00:14:18.870 "compare_and_write": false, 00:14:18.870 "abort": true, 00:14:18.870 "seek_hole": false, 00:14:18.870 "seek_data": false, 00:14:18.870 "copy": true, 00:14:18.870 "nvme_iov_md": false 00:14:18.870 }, 00:14:18.870 "memory_domains": [ 00:14:18.870 { 00:14:18.870 "dma_device_id": "system", 00:14:18.870 "dma_device_type": 1 00:14:18.870 }, 00:14:18.870 { 00:14:18.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.870 "dma_device_type": 2 00:14:18.870 } 00:14:18.870 ], 00:14:18.870 "driver_specific": {} 00:14:18.870 } 00:14:18.870 ] 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 [2024-10-01 13:48:29.053329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.870 [2024-10-01 13:48:29.053392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.870 [2024-10-01 13:48:29.053431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.870 [2024-10-01 13:48:29.055868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.870 [2024-10-01 13:48:29.055921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.870 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.129 "name": "Existed_Raid", 00:14:19.129 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:19.129 "strip_size_kb": 0, 00:14:19.129 "state": "configuring", 00:14:19.129 "raid_level": "raid1", 00:14:19.129 "superblock": true, 00:14:19.129 "num_base_bdevs": 4, 00:14:19.129 "num_base_bdevs_discovered": 3, 00:14:19.129 "num_base_bdevs_operational": 4, 00:14:19.129 "base_bdevs_list": [ 00:14:19.129 { 00:14:19.129 "name": "BaseBdev1", 00:14:19.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.129 "is_configured": false, 00:14:19.129 "data_offset": 0, 00:14:19.129 "data_size": 0 00:14:19.129 }, 00:14:19.129 { 00:14:19.129 "name": "BaseBdev2", 00:14:19.129 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 
00:14:19.129 "is_configured": true, 00:14:19.129 "data_offset": 2048, 00:14:19.129 "data_size": 63488 00:14:19.129 }, 00:14:19.129 { 00:14:19.129 "name": "BaseBdev3", 00:14:19.129 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:19.129 "is_configured": true, 00:14:19.129 "data_offset": 2048, 00:14:19.129 "data_size": 63488 00:14:19.129 }, 00:14:19.129 { 00:14:19.129 "name": "BaseBdev4", 00:14:19.129 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:19.129 "is_configured": true, 00:14:19.129 "data_offset": 2048, 00:14:19.129 "data_size": 63488 00:14:19.129 } 00:14:19.129 ] 00:14:19.129 }' 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.129 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.387 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:19.387 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.387 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.387 [2024-10-01 13:48:29.508698] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.387 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.387 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.388 "name": "Existed_Raid", 00:14:19.388 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:19.388 "strip_size_kb": 0, 00:14:19.388 "state": "configuring", 00:14:19.388 "raid_level": "raid1", 00:14:19.388 "superblock": true, 00:14:19.388 "num_base_bdevs": 4, 00:14:19.388 "num_base_bdevs_discovered": 2, 00:14:19.388 "num_base_bdevs_operational": 4, 00:14:19.388 "base_bdevs_list": [ 00:14:19.388 { 00:14:19.388 "name": "BaseBdev1", 00:14:19.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.388 "is_configured": false, 00:14:19.388 "data_offset": 0, 00:14:19.388 "data_size": 0 00:14:19.388 }, 00:14:19.388 { 00:14:19.388 "name": null, 00:14:19.388 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:19.388 
"is_configured": false, 00:14:19.388 "data_offset": 0, 00:14:19.388 "data_size": 63488 00:14:19.388 }, 00:14:19.388 { 00:14:19.388 "name": "BaseBdev3", 00:14:19.388 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:19.388 "is_configured": true, 00:14:19.388 "data_offset": 2048, 00:14:19.388 "data_size": 63488 00:14:19.388 }, 00:14:19.388 { 00:14:19.388 "name": "BaseBdev4", 00:14:19.388 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:19.388 "is_configured": true, 00:14:19.388 "data_offset": 2048, 00:14:19.388 "data_size": 63488 00:14:19.388 } 00:14:19.388 ] 00:14:19.388 }' 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.388 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 13:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 [2024-10-01 13:48:30.044946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.955 BaseBdev1 
00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 [ 00:14:19.955 { 00:14:19.955 "name": "BaseBdev1", 00:14:19.955 "aliases": [ 00:14:19.955 "3c894d0c-3a97-4604-b2fe-0498941e1f9e" 00:14:19.955 ], 00:14:19.955 "product_name": "Malloc disk", 00:14:19.955 "block_size": 512, 00:14:19.955 "num_blocks": 65536, 00:14:19.955 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:19.955 "assigned_rate_limits": { 00:14:19.955 
"rw_ios_per_sec": 0, 00:14:19.955 "rw_mbytes_per_sec": 0, 00:14:19.955 "r_mbytes_per_sec": 0, 00:14:19.955 "w_mbytes_per_sec": 0 00:14:19.955 }, 00:14:19.955 "claimed": true, 00:14:19.955 "claim_type": "exclusive_write", 00:14:19.955 "zoned": false, 00:14:19.955 "supported_io_types": { 00:14:19.955 "read": true, 00:14:19.955 "write": true, 00:14:19.955 "unmap": true, 00:14:19.955 "flush": true, 00:14:19.955 "reset": true, 00:14:19.955 "nvme_admin": false, 00:14:19.955 "nvme_io": false, 00:14:19.955 "nvme_io_md": false, 00:14:19.955 "write_zeroes": true, 00:14:19.955 "zcopy": true, 00:14:19.955 "get_zone_info": false, 00:14:19.955 "zone_management": false, 00:14:19.955 "zone_append": false, 00:14:19.955 "compare": false, 00:14:19.955 "compare_and_write": false, 00:14:19.955 "abort": true, 00:14:19.955 "seek_hole": false, 00:14:19.955 "seek_data": false, 00:14:19.955 "copy": true, 00:14:19.955 "nvme_iov_md": false 00:14:19.955 }, 00:14:19.955 "memory_domains": [ 00:14:19.955 { 00:14:19.955 "dma_device_id": "system", 00:14:19.955 "dma_device_type": 1 00:14:19.955 }, 00:14:19.955 { 00:14:19.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.955 "dma_device_type": 2 00:14:19.955 } 00:14:19.955 ], 00:14:19.955 "driver_specific": {} 00:14:19.955 } 00:14:19.955 ] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.955 "name": "Existed_Raid", 00:14:19.955 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:19.955 "strip_size_kb": 0, 00:14:19.955 "state": "configuring", 00:14:19.955 "raid_level": "raid1", 00:14:19.955 "superblock": true, 00:14:19.955 "num_base_bdevs": 4, 00:14:19.955 "num_base_bdevs_discovered": 3, 00:14:19.955 "num_base_bdevs_operational": 4, 00:14:19.955 "base_bdevs_list": [ 00:14:19.955 { 00:14:19.955 "name": "BaseBdev1", 00:14:19.955 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:19.955 "is_configured": true, 00:14:19.955 "data_offset": 2048, 00:14:19.955 "data_size": 63488 
00:14:19.956 }, 00:14:19.956 { 00:14:19.956 "name": null, 00:14:19.956 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:19.956 "is_configured": false, 00:14:19.956 "data_offset": 0, 00:14:19.956 "data_size": 63488 00:14:19.956 }, 00:14:19.956 { 00:14:19.956 "name": "BaseBdev3", 00:14:19.956 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:19.956 "is_configured": true, 00:14:19.956 "data_offset": 2048, 00:14:19.956 "data_size": 63488 00:14:19.956 }, 00:14:19.956 { 00:14:19.956 "name": "BaseBdev4", 00:14:19.956 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:19.956 "is_configured": true, 00:14:19.956 "data_offset": 2048, 00:14:19.956 "data_size": 63488 00:14:19.956 } 00:14:19.956 ] 00:14:19.956 }' 00:14:19.956 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.956 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 
[2024-10-01 13:48:30.588445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.523 13:48:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.523 "name": "Existed_Raid", 00:14:20.523 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:20.523 "strip_size_kb": 0, 00:14:20.523 "state": "configuring", 00:14:20.523 "raid_level": "raid1", 00:14:20.523 "superblock": true, 00:14:20.523 "num_base_bdevs": 4, 00:14:20.523 "num_base_bdevs_discovered": 2, 00:14:20.523 "num_base_bdevs_operational": 4, 00:14:20.523 "base_bdevs_list": [ 00:14:20.523 { 00:14:20.523 "name": "BaseBdev1", 00:14:20.523 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:20.523 "is_configured": true, 00:14:20.523 "data_offset": 2048, 00:14:20.523 "data_size": 63488 00:14:20.523 }, 00:14:20.523 { 00:14:20.523 "name": null, 00:14:20.523 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:20.523 "is_configured": false, 00:14:20.523 "data_offset": 0, 00:14:20.523 "data_size": 63488 00:14:20.523 }, 00:14:20.523 { 00:14:20.523 "name": null, 00:14:20.523 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:20.523 "is_configured": false, 00:14:20.523 "data_offset": 0, 00:14:20.523 "data_size": 63488 00:14:20.523 }, 00:14:20.523 { 00:14:20.523 "name": "BaseBdev4", 00:14:20.523 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:20.523 "is_configured": true, 00:14:20.523 "data_offset": 2048, 00:14:20.523 "data_size": 63488 00:14:20.523 } 00:14:20.523 ] 00:14:20.523 }' 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.523 13:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.090 13:48:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.090 [2024-10-01 13:48:31.063758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.090 "name": "Existed_Raid", 00:14:21.090 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:21.090 "strip_size_kb": 0, 00:14:21.090 "state": "configuring", 00:14:21.090 "raid_level": "raid1", 00:14:21.090 "superblock": true, 00:14:21.090 "num_base_bdevs": 4, 00:14:21.090 "num_base_bdevs_discovered": 3, 00:14:21.090 "num_base_bdevs_operational": 4, 00:14:21.090 "base_bdevs_list": [ 00:14:21.090 { 00:14:21.090 "name": "BaseBdev1", 00:14:21.090 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:21.090 "is_configured": true, 00:14:21.090 "data_offset": 2048, 00:14:21.090 "data_size": 63488 00:14:21.090 }, 00:14:21.090 { 00:14:21.090 "name": null, 00:14:21.090 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:21.090 "is_configured": false, 00:14:21.090 "data_offset": 0, 00:14:21.090 "data_size": 63488 00:14:21.090 }, 00:14:21.090 { 00:14:21.090 "name": "BaseBdev3", 00:14:21.090 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:21.090 "is_configured": true, 00:14:21.090 "data_offset": 2048, 00:14:21.090 "data_size": 63488 00:14:21.090 }, 00:14:21.090 { 00:14:21.090 "name": "BaseBdev4", 00:14:21.090 "uuid": 
"e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:21.090 "is_configured": true, 00:14:21.090 "data_offset": 2048, 00:14:21.090 "data_size": 63488 00:14:21.090 } 00:14:21.090 ] 00:14:21.090 }' 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.090 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.349 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 [2024-10-01 13:48:31.511554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.607 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.607 "name": "Existed_Raid", 00:14:21.607 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:21.607 "strip_size_kb": 0, 00:14:21.607 "state": "configuring", 00:14:21.607 "raid_level": "raid1", 00:14:21.607 "superblock": true, 00:14:21.607 "num_base_bdevs": 4, 00:14:21.607 "num_base_bdevs_discovered": 2, 00:14:21.607 "num_base_bdevs_operational": 4, 00:14:21.607 "base_bdevs_list": [ 00:14:21.607 { 00:14:21.607 "name": null, 00:14:21.607 
"uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:21.607 "is_configured": false, 00:14:21.607 "data_offset": 0, 00:14:21.607 "data_size": 63488 00:14:21.607 }, 00:14:21.607 { 00:14:21.607 "name": null, 00:14:21.607 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:21.607 "is_configured": false, 00:14:21.607 "data_offset": 0, 00:14:21.607 "data_size": 63488 00:14:21.607 }, 00:14:21.607 { 00:14:21.607 "name": "BaseBdev3", 00:14:21.607 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:21.607 "is_configured": true, 00:14:21.607 "data_offset": 2048, 00:14:21.607 "data_size": 63488 00:14:21.607 }, 00:14:21.607 { 00:14:21.608 "name": "BaseBdev4", 00:14:21.608 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:21.608 "is_configured": true, 00:14:21.608 "data_offset": 2048, 00:14:21.608 "data_size": 63488 00:14:21.608 } 00:14:21.608 ] 00:14:21.608 }' 00:14:21.608 13:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.608 13:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.175 [2024-10-01 13:48:32.133264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.175 "name": "Existed_Raid", 00:14:22.175 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:22.175 "strip_size_kb": 0, 00:14:22.175 "state": "configuring", 00:14:22.175 "raid_level": "raid1", 00:14:22.175 "superblock": true, 00:14:22.175 "num_base_bdevs": 4, 00:14:22.175 "num_base_bdevs_discovered": 3, 00:14:22.175 "num_base_bdevs_operational": 4, 00:14:22.175 "base_bdevs_list": [ 00:14:22.175 { 00:14:22.175 "name": null, 00:14:22.175 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:22.175 "is_configured": false, 00:14:22.175 "data_offset": 0, 00:14:22.175 "data_size": 63488 00:14:22.175 }, 00:14:22.175 { 00:14:22.175 "name": "BaseBdev2", 00:14:22.175 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:22.175 "is_configured": true, 00:14:22.175 "data_offset": 2048, 00:14:22.175 "data_size": 63488 00:14:22.175 }, 00:14:22.175 { 00:14:22.175 "name": "BaseBdev3", 00:14:22.175 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:22.175 "is_configured": true, 00:14:22.175 "data_offset": 2048, 00:14:22.175 "data_size": 63488 00:14:22.175 }, 00:14:22.175 { 00:14:22.175 "name": "BaseBdev4", 00:14:22.175 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:22.175 "is_configured": true, 00:14:22.175 "data_offset": 2048, 00:14:22.175 "data_size": 63488 00:14:22.175 } 00:14:22.175 ] 00:14:22.175 }' 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.175 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.434 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.435 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:22.435 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.435 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c894d0c-3a97-4604-b2fe-0498941e1f9e 00:14:22.435 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.435 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 [2024-10-01 13:48:32.668853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.694 [2024-10-01 13:48:32.669133] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.694 [2024-10-01 13:48:32.669154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.694 NewBaseBdev 00:14:22.694 [2024-10-01 13:48:32.669474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:14:22.694 [2024-10-01 13:48:32.669659] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.694 [2024-10-01 13:48:32.669669] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:22.694 [2024-10-01 13:48:32.669830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.694 13:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.694 [ 00:14:22.694 { 00:14:22.694 "name": "NewBaseBdev", 00:14:22.694 "aliases": [ 00:14:22.694 "3c894d0c-3a97-4604-b2fe-0498941e1f9e" 00:14:22.694 ], 00:14:22.694 "product_name": "Malloc disk", 00:14:22.694 "block_size": 512, 00:14:22.694 "num_blocks": 65536, 00:14:22.694 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:22.694 "assigned_rate_limits": { 00:14:22.694 "rw_ios_per_sec": 0, 00:14:22.694 "rw_mbytes_per_sec": 0, 00:14:22.694 "r_mbytes_per_sec": 0, 00:14:22.694 "w_mbytes_per_sec": 0 00:14:22.694 }, 00:14:22.694 "claimed": true, 00:14:22.694 "claim_type": "exclusive_write", 00:14:22.694 "zoned": false, 00:14:22.694 "supported_io_types": { 00:14:22.694 "read": true, 00:14:22.695 "write": true, 00:14:22.695 "unmap": true, 00:14:22.695 "flush": true, 00:14:22.695 "reset": true, 00:14:22.695 "nvme_admin": false, 00:14:22.695 "nvme_io": false, 00:14:22.695 "nvme_io_md": false, 00:14:22.695 "write_zeroes": true, 00:14:22.695 "zcopy": true, 00:14:22.695 "get_zone_info": false, 00:14:22.695 "zone_management": false, 00:14:22.695 "zone_append": false, 00:14:22.695 "compare": false, 00:14:22.695 "compare_and_write": false, 00:14:22.695 "abort": true, 00:14:22.695 "seek_hole": false, 00:14:22.695 "seek_data": false, 00:14:22.695 "copy": true, 00:14:22.695 "nvme_iov_md": false 00:14:22.695 }, 00:14:22.695 "memory_domains": [ 00:14:22.695 { 00:14:22.695 "dma_device_id": "system", 00:14:22.695 "dma_device_type": 1 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.695 "dma_device_type": 2 00:14:22.695 } 00:14:22.695 ], 00:14:22.695 "driver_specific": {} 00:14:22.695 } 00:14:22.695 ] 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.695 "name": "Existed_Raid", 00:14:22.695 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:22.695 "strip_size_kb": 0, 00:14:22.695 "state": "online", 00:14:22.695 "raid_level": 
"raid1", 00:14:22.695 "superblock": true, 00:14:22.695 "num_base_bdevs": 4, 00:14:22.695 "num_base_bdevs_discovered": 4, 00:14:22.695 "num_base_bdevs_operational": 4, 00:14:22.695 "base_bdevs_list": [ 00:14:22.695 { 00:14:22.695 "name": "NewBaseBdev", 00:14:22.695 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 2048, 00:14:22.695 "data_size": 63488 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev2", 00:14:22.695 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 2048, 00:14:22.695 "data_size": 63488 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev3", 00:14:22.695 "uuid": "e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 2048, 00:14:22.695 "data_size": 63488 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev4", 00:14:22.695 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 2048, 00:14:22.695 "data_size": 63488 00:14:22.695 } 00:14:22.695 ] 00:14:22.695 }' 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.695 13:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.955 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.955 [2024-10-01 13:48:33.144932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.214 "name": "Existed_Raid", 00:14:23.214 "aliases": [ 00:14:23.214 "5e441ac7-ae8c-418f-8de6-710352319fcc" 00:14:23.214 ], 00:14:23.214 "product_name": "Raid Volume", 00:14:23.214 "block_size": 512, 00:14:23.214 "num_blocks": 63488, 00:14:23.214 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:23.214 "assigned_rate_limits": { 00:14:23.214 "rw_ios_per_sec": 0, 00:14:23.214 "rw_mbytes_per_sec": 0, 00:14:23.214 "r_mbytes_per_sec": 0, 00:14:23.214 "w_mbytes_per_sec": 0 00:14:23.214 }, 00:14:23.214 "claimed": false, 00:14:23.214 "zoned": false, 00:14:23.214 "supported_io_types": { 00:14:23.214 "read": true, 00:14:23.214 "write": true, 00:14:23.214 "unmap": false, 00:14:23.214 "flush": false, 00:14:23.214 "reset": true, 00:14:23.214 "nvme_admin": false, 00:14:23.214 "nvme_io": false, 00:14:23.214 "nvme_io_md": false, 00:14:23.214 "write_zeroes": true, 00:14:23.214 "zcopy": false, 00:14:23.214 "get_zone_info": false, 00:14:23.214 "zone_management": false, 00:14:23.214 "zone_append": false, 00:14:23.214 "compare": false, 00:14:23.214 "compare_and_write": false, 00:14:23.214 "abort": false, 00:14:23.214 "seek_hole": false, 
00:14:23.214 "seek_data": false, 00:14:23.214 "copy": false, 00:14:23.214 "nvme_iov_md": false 00:14:23.214 }, 00:14:23.214 "memory_domains": [ 00:14:23.214 { 00:14:23.214 "dma_device_id": "system", 00:14:23.214 "dma_device_type": 1 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.214 "dma_device_type": 2 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "system", 00:14:23.214 "dma_device_type": 1 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.214 "dma_device_type": 2 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "system", 00:14:23.214 "dma_device_type": 1 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.214 "dma_device_type": 2 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "system", 00:14:23.214 "dma_device_type": 1 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.214 "dma_device_type": 2 00:14:23.214 } 00:14:23.214 ], 00:14:23.214 "driver_specific": { 00:14:23.214 "raid": { 00:14:23.214 "uuid": "5e441ac7-ae8c-418f-8de6-710352319fcc", 00:14:23.214 "strip_size_kb": 0, 00:14:23.214 "state": "online", 00:14:23.214 "raid_level": "raid1", 00:14:23.214 "superblock": true, 00:14:23.214 "num_base_bdevs": 4, 00:14:23.214 "num_base_bdevs_discovered": 4, 00:14:23.214 "num_base_bdevs_operational": 4, 00:14:23.214 "base_bdevs_list": [ 00:14:23.214 { 00:14:23.214 "name": "NewBaseBdev", 00:14:23.214 "uuid": "3c894d0c-3a97-4604-b2fe-0498941e1f9e", 00:14:23.214 "is_configured": true, 00:14:23.214 "data_offset": 2048, 00:14:23.214 "data_size": 63488 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "name": "BaseBdev2", 00:14:23.214 "uuid": "5ad49453-1f2e-488f-a556-bf419f078f16", 00:14:23.214 "is_configured": true, 00:14:23.214 "data_offset": 2048, 00:14:23.214 "data_size": 63488 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "name": "BaseBdev3", 00:14:23.214 "uuid": 
"e352434d-433f-48a4-8ddb-5671de5c7094", 00:14:23.214 "is_configured": true, 00:14:23.214 "data_offset": 2048, 00:14:23.214 "data_size": 63488 00:14:23.214 }, 00:14:23.214 { 00:14:23.214 "name": "BaseBdev4", 00:14:23.214 "uuid": "e1f22e95-699e-45d2-88e7-8f80aec150f0", 00:14:23.214 "is_configured": true, 00:14:23.214 "data_offset": 2048, 00:14:23.214 "data_size": 63488 00:14:23.214 } 00:14:23.214 ] 00:14:23.214 } 00:14:23.214 } 00:14:23.214 }' 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:23.214 BaseBdev2 00:14:23.214 BaseBdev3 00:14:23.214 BaseBdev4' 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.214 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.215 
13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.215 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.474 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.474 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.474 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.474 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.475 [2024-10-01 13:48:33.436165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.475 [2024-10-01 13:48:33.436203] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.475 [2024-10-01 13:48:33.436333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.475 [2024-10-01 13:48:33.436693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.475 [2024-10-01 13:48:33.436718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:23.475 13:48:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73766 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73766 ']' 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73766 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73766 00:14:23.475 killing process with pid 73766 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73766' 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73766 00:14:23.475 [2024-10-01 13:48:33.485890] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.475 13:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73766 00:14:24.043 [2024-10-01 13:48:33.933299] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.421 ************************************ 00:14:25.421 END TEST raid_state_function_test_sb 00:14:25.421 ************************************ 00:14:25.421 13:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.421 00:14:25.421 real 0m11.949s 00:14:25.421 user 0m18.422s 00:14:25.421 sys 0m2.627s 00:14:25.421 13:48:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.421 13:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 13:48:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:25.421 13:48:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:25.421 13:48:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.421 13:48:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 ************************************ 00:14:25.421 START TEST raid_superblock_test 00:14:25.421 ************************************ 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
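The per-bdev property checks traced above (bdev_raid.sh@188–193) hinge on a jq filter that keeps only configured base bdevs, then compares a joined `block_size`/`md_size`/`md_interleave`/`dif_type` string against the raid bdev's. A minimal Python sketch of that selection, run on a hypothetical sample shaped like the `bdev_get_bdevs` JSON in this log (the names and the unconfigured entry are illustrative, not from this run):

```python
# Hypothetical bdev JSON modeled on the raid_bdev output seen in this log.
raid_bdev = {
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": False},  # illustrative only
            ]
        }
    }
}

# Python equivalent of the jq filter used at bdev_raid.sh@188:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
names = [
    b["name"]
    for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print("\n".join(names))
```

The test then loops over each returned name, fetching the bdev and comparing its joined field string (`'512    '` in this run, since md fields are null) to the raid bdev's.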
00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74438 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74438 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74438 ']' 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.421 13:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 [2024-10-01 13:48:35.556867] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
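The raid_superblock_test prologue traced below (bdev_raid.sh@416–423) loops `i` from 1 to `num_base_bdevs`, appending `malloc$i`, `pt$i`, and a zero-padded UUID to three bash arrays. A minimal Python sketch of the same naming scheme, assuming the pattern inferred from the xtrace lines (with `num_base_bdevs=4` as in this run):

```python
# Mirror of the bash array construction in bdev_raid.sh@416-423 (sketch;
# the zero-padded UUID pattern is inferred from the xtrace output above).
num_base_bdevs = 4
base_bdevs_malloc = [f"malloc{i}" for i in range(1, num_base_bdevs + 1)]
base_bdevs_pt = [f"pt{i}" for i in range(1, num_base_bdevs + 1)]
base_bdevs_pt_uuid = [
    f"00000000-0000-0000-0000-{i:012d}" for i in range(1, num_base_bdevs + 1)
]
print(base_bdevs_malloc, base_bdevs_pt, base_bdevs_pt_uuid[0])
```

Each iteration then issues `bdev_malloc_create 32 512 -b malloc$i` followed by `bdev_passthru_create -b malloc$i -p pt$i -u $uuid`, as the log entries that follow show.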
00:14:25.421 [2024-10-01 13:48:35.557869] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:14:25.680 [2024-10-01 13:48:35.734758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.938 [2024-10-01 13:48:36.012370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.197 [2024-10-01 13:48:36.248814] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.197 [2024-10-01 13:48:36.249091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:26.457 
13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.457 malloc1 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.457 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.457 [2024-10-01 13:48:36.474808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.457 [2024-10-01 13:48:36.475050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.457 [2024-10-01 13:48:36.475106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:26.457 [2024-10-01 13:48:36.475126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.458 [2024-10-01 13:48:36.478040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.458 [2024-10-01 13:48:36.478203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.458 pt1 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 malloc2 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 [2024-10-01 13:48:36.545700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.458 [2024-10-01 13:48:36.545908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.458 [2024-10-01 13:48:36.546039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.458 [2024-10-01 13:48:36.546113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.458 [2024-10-01 13:48:36.549060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.458 [2024-10-01 13:48:36.549208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.458 
pt2 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 malloc3 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 [2024-10-01 13:48:36.610051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.458 [2024-10-01 13:48:36.610234] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.458 [2024-10-01 13:48:36.610296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:26.458 [2024-10-01 13:48:36.610372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.458 [2024-10-01 13:48:36.613136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.458 [2024-10-01 13:48:36.613277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.458 pt3 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.458 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.718 malloc4 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.718 [2024-10-01 13:48:36.676788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:26.718 [2024-10-01 13:48:36.676973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.718 [2024-10-01 13:48:36.677035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:26.718 [2024-10-01 13:48:36.677216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.718 [2024-10-01 13:48:36.679992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.718 [2024-10-01 13:48:36.680138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:26.718 pt4 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.718 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.719 [2024-10-01 13:48:36.688869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.719 [2024-10-01 13:48:36.691381] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.719 [2024-10-01 13:48:36.691609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.719 [2024-10-01 13:48:36.691667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:26.719 [2024-10-01 13:48:36.691888] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:26.719 [2024-10-01 13:48:36.691901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.719 [2024-10-01 13:48:36.692246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:26.719 [2024-10-01 13:48:36.692450] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:26.719 [2024-10-01 13:48:36.692467] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:26.719 [2024-10-01 13:48:36.692711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.719 
13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.719 "name": "raid_bdev1", 00:14:26.719 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:26.719 "strip_size_kb": 0, 00:14:26.719 "state": "online", 00:14:26.719 "raid_level": "raid1", 00:14:26.719 "superblock": true, 00:14:26.719 "num_base_bdevs": 4, 00:14:26.719 "num_base_bdevs_discovered": 4, 00:14:26.719 "num_base_bdevs_operational": 4, 00:14:26.719 "base_bdevs_list": [ 00:14:26.719 { 00:14:26.719 "name": "pt1", 00:14:26.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.719 "is_configured": true, 00:14:26.719 "data_offset": 2048, 00:14:26.719 "data_size": 63488 00:14:26.719 }, 00:14:26.719 { 00:14:26.719 "name": "pt2", 00:14:26.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.719 "is_configured": true, 00:14:26.719 "data_offset": 2048, 00:14:26.719 "data_size": 63488 00:14:26.719 }, 00:14:26.719 { 00:14:26.719 "name": "pt3", 00:14:26.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.719 "is_configured": true, 00:14:26.719 "data_offset": 2048, 00:14:26.719 "data_size": 63488 
00:14:26.719 }, 00:14:26.719 { 00:14:26.719 "name": "pt4", 00:14:26.719 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.719 "is_configured": true, 00:14:26.719 "data_offset": 2048, 00:14:26.719 "data_size": 63488 00:14:26.719 } 00:14:26.719 ] 00:14:26.719 }' 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.719 13:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.978 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.978 [2024-10-01 13:48:37.140709] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.263 "name": "raid_bdev1", 00:14:27.263 "aliases": [ 00:14:27.263 "74f13d0d-0253-4fb0-8d99-607d180d37b5" 00:14:27.263 ], 
00:14:27.263 "product_name": "Raid Volume", 00:14:27.263 "block_size": 512, 00:14:27.263 "num_blocks": 63488, 00:14:27.263 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:27.263 "assigned_rate_limits": { 00:14:27.263 "rw_ios_per_sec": 0, 00:14:27.263 "rw_mbytes_per_sec": 0, 00:14:27.263 "r_mbytes_per_sec": 0, 00:14:27.263 "w_mbytes_per_sec": 0 00:14:27.263 }, 00:14:27.263 "claimed": false, 00:14:27.263 "zoned": false, 00:14:27.263 "supported_io_types": { 00:14:27.263 "read": true, 00:14:27.263 "write": true, 00:14:27.263 "unmap": false, 00:14:27.263 "flush": false, 00:14:27.263 "reset": true, 00:14:27.263 "nvme_admin": false, 00:14:27.263 "nvme_io": false, 00:14:27.263 "nvme_io_md": false, 00:14:27.263 "write_zeroes": true, 00:14:27.263 "zcopy": false, 00:14:27.263 "get_zone_info": false, 00:14:27.263 "zone_management": false, 00:14:27.263 "zone_append": false, 00:14:27.263 "compare": false, 00:14:27.263 "compare_and_write": false, 00:14:27.263 "abort": false, 00:14:27.263 "seek_hole": false, 00:14:27.263 "seek_data": false, 00:14:27.263 "copy": false, 00:14:27.263 "nvme_iov_md": false 00:14:27.263 }, 00:14:27.263 "memory_domains": [ 00:14:27.263 { 00:14:27.263 "dma_device_id": "system", 00:14:27.263 "dma_device_type": 1 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.263 "dma_device_type": 2 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "system", 00:14:27.263 "dma_device_type": 1 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.263 "dma_device_type": 2 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "system", 00:14:27.263 "dma_device_type": 1 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.263 "dma_device_type": 2 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": "system", 00:14:27.263 "dma_device_type": 1 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:27.263 "dma_device_type": 2 00:14:27.263 } 00:14:27.263 ], 00:14:27.263 "driver_specific": { 00:14:27.263 "raid": { 00:14:27.263 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:27.263 "strip_size_kb": 0, 00:14:27.263 "state": "online", 00:14:27.263 "raid_level": "raid1", 00:14:27.263 "superblock": true, 00:14:27.263 "num_base_bdevs": 4, 00:14:27.263 "num_base_bdevs_discovered": 4, 00:14:27.263 "num_base_bdevs_operational": 4, 00:14:27.263 "base_bdevs_list": [ 00:14:27.263 { 00:14:27.263 "name": "pt1", 00:14:27.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.263 "is_configured": true, 00:14:27.263 "data_offset": 2048, 00:14:27.263 "data_size": 63488 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "name": "pt2", 00:14:27.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.263 "is_configured": true, 00:14:27.263 "data_offset": 2048, 00:14:27.263 "data_size": 63488 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "name": "pt3", 00:14:27.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.263 "is_configured": true, 00:14:27.263 "data_offset": 2048, 00:14:27.263 "data_size": 63488 00:14:27.263 }, 00:14:27.263 { 00:14:27.263 "name": "pt4", 00:14:27.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.263 "is_configured": true, 00:14:27.263 "data_offset": 2048, 00:14:27.263 "data_size": 63488 00:14:27.263 } 00:14:27.263 ] 00:14:27.263 } 00:14:27.263 } 00:14:27.263 }' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.263 pt2 00:14:27.263 pt3 00:14:27.263 pt4' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.263 13:48:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.263 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:27.264 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.264 [2024-10-01 13:48:37.444215] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74f13d0d-0253-4fb0-8d99-607d180d37b5 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74f13d0d-0253-4fb0-8d99-607d180d37b5 ']' 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 [2024-10-01 13:48:37.491811] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.523 [2024-10-01 13:48:37.491960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.523 [2024-10-01 13:48:37.492204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.523 [2024-10-01 13:48:37.492419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.523 [2024-10-01 13:48:37.492543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.523 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.524 13:48:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.524 [2024-10-01 13:48:37.663659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:27.524 [2024-10-01 13:48:37.666242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:27.524 [2024-10-01 13:48:37.666441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:27.524 [2024-10-01 13:48:37.666495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:27.524 [2024-10-01 13:48:37.666561] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:27.524 [2024-10-01 13:48:37.666627] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:27.524 [2024-10-01 13:48:37.666650] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:27.524 [2024-10-01 13:48:37.666674] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:27.524 [2024-10-01 13:48:37.666691] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.524 [2024-10-01 13:48:37.666706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:14:27.524 request: 00:14:27.524 { 00:14:27.524 "name": "raid_bdev1", 00:14:27.524 "raid_level": "raid1", 00:14:27.524 "base_bdevs": [ 00:14:27.524 "malloc1", 00:14:27.524 "malloc2", 00:14:27.524 "malloc3", 00:14:27.524 "malloc4" 00:14:27.524 ], 00:14:27.524 "superblock": false, 00:14:27.524 "method": "bdev_raid_create", 00:14:27.524 "req_id": 1 00:14:27.524 } 00:14:27.524 Got JSON-RPC error response 00:14:27.524 response: 00:14:27.524 { 00:14:27.524 "code": -17, 00:14:27.524 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:27.524 } 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.524 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.784 
13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.784 [2024-10-01 13:48:37.735592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.784 [2024-10-01 13:48:37.735770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.784 [2024-10-01 13:48:37.735905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:27.784 [2024-10-01 13:48:37.735994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.784 [2024-10-01 13:48:37.738880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.784 [2024-10-01 13:48:37.739021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.784 [2024-10-01 13:48:37.739191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:27.784 [2024-10-01 13:48:37.739322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.784 pt1 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.784 13:48:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.784 "name": "raid_bdev1", 00:14:27.784 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:27.784 "strip_size_kb": 0, 00:14:27.784 "state": "configuring", 00:14:27.784 "raid_level": "raid1", 00:14:27.784 "superblock": true, 00:14:27.784 "num_base_bdevs": 4, 00:14:27.784 "num_base_bdevs_discovered": 1, 00:14:27.784 "num_base_bdevs_operational": 4, 00:14:27.784 "base_bdevs_list": [ 00:14:27.784 { 00:14:27.784 "name": "pt1", 00:14:27.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.784 "is_configured": true, 00:14:27.784 "data_offset": 2048, 00:14:27.784 "data_size": 63488 00:14:27.784 }, 00:14:27.784 { 00:14:27.784 "name": null, 00:14:27.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.784 "is_configured": false, 00:14:27.784 "data_offset": 2048, 00:14:27.784 "data_size": 63488 00:14:27.784 }, 00:14:27.784 { 00:14:27.784 "name": null, 00:14:27.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.784 
"is_configured": false, 00:14:27.784 "data_offset": 2048, 00:14:27.784 "data_size": 63488 00:14:27.784 }, 00:14:27.784 { 00:14:27.784 "name": null, 00:14:27.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.784 "is_configured": false, 00:14:27.784 "data_offset": 2048, 00:14:27.784 "data_size": 63488 00:14:27.784 } 00:14:27.784 ] 00:14:27.784 }' 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.784 13:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.044 [2024-10-01 13:48:38.159602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.044 [2024-10-01 13:48:38.159695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.044 [2024-10-01 13:48:38.159723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:28.044 [2024-10-01 13:48:38.159739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.044 [2024-10-01 13:48:38.160374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.044 [2024-10-01 13:48:38.160419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.044 [2024-10-01 13:48:38.160528] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.044 [2024-10-01 13:48:38.160572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:28.044 pt2 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.044 [2024-10-01 13:48:38.171593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.044 "name": "raid_bdev1", 00:14:28.044 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:28.044 "strip_size_kb": 0, 00:14:28.044 "state": "configuring", 00:14:28.044 "raid_level": "raid1", 00:14:28.044 "superblock": true, 00:14:28.044 "num_base_bdevs": 4, 00:14:28.044 "num_base_bdevs_discovered": 1, 00:14:28.044 "num_base_bdevs_operational": 4, 00:14:28.044 "base_bdevs_list": [ 00:14:28.044 { 00:14:28.044 "name": "pt1", 00:14:28.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.044 "is_configured": true, 00:14:28.044 "data_offset": 2048, 00:14:28.044 "data_size": 63488 00:14:28.044 }, 00:14:28.044 { 00:14:28.044 "name": null, 00:14:28.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.044 "is_configured": false, 00:14:28.044 "data_offset": 0, 00:14:28.044 "data_size": 63488 00:14:28.044 }, 00:14:28.044 { 00:14:28.044 "name": null, 00:14:28.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.044 "is_configured": false, 00:14:28.044 "data_offset": 2048, 00:14:28.044 "data_size": 63488 00:14:28.044 }, 00:14:28.044 { 00:14:28.044 "name": null, 00:14:28.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.044 "is_configured": false, 00:14:28.044 "data_offset": 2048, 00:14:28.044 "data_size": 63488 00:14:28.044 } 00:14:28.044 ] 00:14:28.044 }' 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.044 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.613 [2024-10-01 13:48:38.615615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.613 [2024-10-01 13:48:38.615846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.613 [2024-10-01 13:48:38.615941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:28.613 [2024-10-01 13:48:38.616032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.613 [2024-10-01 13:48:38.616695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.613 [2024-10-01 13:48:38.616731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.613 [2024-10-01 13:48:38.616850] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.613 [2024-10-01 13:48:38.616878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.613 pt2 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.613 13:48:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.613 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.613 [2024-10-01 13:48:38.627590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.613 [2024-10-01 13:48:38.627662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.613 [2024-10-01 13:48:38.627689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:28.613 [2024-10-01 13:48:38.627701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.614 [2024-10-01 13:48:38.628249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.614 [2024-10-01 13:48:38.628268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.614 [2024-10-01 13:48:38.628372] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:28.614 [2024-10-01 13:48:38.628413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.614 pt3 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.614 [2024-10-01 13:48:38.639553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:28.614 [2024-10-01 
13:48:38.639736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.614 [2024-10-01 13:48:38.639798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:28.614 [2024-10-01 13:48:38.639878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.614 [2024-10-01 13:48:38.640463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.614 [2024-10-01 13:48:38.640598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:28.614 [2024-10-01 13:48:38.640792] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:28.614 [2024-10-01 13:48:38.640900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:28.614 [2024-10-01 13:48:38.641120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.614 [2024-10-01 13:48:38.641215] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.614 [2024-10-01 13:48:38.641570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.614 [2024-10-01 13:48:38.641837] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.614 [2024-10-01 13:48:38.641945] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:28.614 [2024-10-01 13:48:38.642194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.614 pt4 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.614 "name": "raid_bdev1", 00:14:28.614 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:28.614 "strip_size_kb": 0, 00:14:28.614 "state": "online", 00:14:28.614 "raid_level": "raid1", 00:14:28.614 "superblock": true, 00:14:28.614 "num_base_bdevs": 4, 00:14:28.614 
"num_base_bdevs_discovered": 4, 00:14:28.614 "num_base_bdevs_operational": 4, 00:14:28.614 "base_bdevs_list": [ 00:14:28.614 { 00:14:28.614 "name": "pt1", 00:14:28.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.614 "is_configured": true, 00:14:28.614 "data_offset": 2048, 00:14:28.614 "data_size": 63488 00:14:28.614 }, 00:14:28.614 { 00:14:28.614 "name": "pt2", 00:14:28.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.614 "is_configured": true, 00:14:28.614 "data_offset": 2048, 00:14:28.614 "data_size": 63488 00:14:28.614 }, 00:14:28.614 { 00:14:28.614 "name": "pt3", 00:14:28.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.614 "is_configured": true, 00:14:28.614 "data_offset": 2048, 00:14:28.614 "data_size": 63488 00:14:28.614 }, 00:14:28.614 { 00:14:28.614 "name": "pt4", 00:14:28.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.614 "is_configured": true, 00:14:28.614 "data_offset": 2048, 00:14:28.614 "data_size": 63488 00:14:28.614 } 00:14:28.614 ] 00:14:28.614 }' 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.614 13:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.873 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.132 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.132 13:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.132 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.132 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.132 [2024-10-01 13:48:39.075374] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.132 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.132 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.132 "name": "raid_bdev1", 00:14:29.132 "aliases": [ 00:14:29.132 "74f13d0d-0253-4fb0-8d99-607d180d37b5" 00:14:29.132 ], 00:14:29.132 "product_name": "Raid Volume", 00:14:29.132 "block_size": 512, 00:14:29.132 "num_blocks": 63488, 00:14:29.132 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:29.132 "assigned_rate_limits": { 00:14:29.132 "rw_ios_per_sec": 0, 00:14:29.132 "rw_mbytes_per_sec": 0, 00:14:29.132 "r_mbytes_per_sec": 0, 00:14:29.132 "w_mbytes_per_sec": 0 00:14:29.132 }, 00:14:29.132 "claimed": false, 00:14:29.132 "zoned": false, 00:14:29.132 "supported_io_types": { 00:14:29.132 "read": true, 00:14:29.132 "write": true, 00:14:29.132 "unmap": false, 00:14:29.132 "flush": false, 00:14:29.132 "reset": true, 00:14:29.132 "nvme_admin": false, 00:14:29.132 "nvme_io": false, 00:14:29.132 "nvme_io_md": false, 00:14:29.132 "write_zeroes": true, 00:14:29.132 "zcopy": false, 00:14:29.132 "get_zone_info": false, 00:14:29.133 "zone_management": false, 00:14:29.133 "zone_append": false, 00:14:29.133 "compare": false, 00:14:29.133 "compare_and_write": false, 00:14:29.133 "abort": false, 00:14:29.133 "seek_hole": false, 00:14:29.133 "seek_data": false, 00:14:29.133 "copy": false, 00:14:29.133 "nvme_iov_md": false 00:14:29.133 }, 00:14:29.133 "memory_domains": [ 00:14:29.133 { 00:14:29.133 "dma_device_id": "system", 00:14:29.133 
"dma_device_type": 1 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.133 "dma_device_type": 2 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "system", 00:14:29.133 "dma_device_type": 1 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.133 "dma_device_type": 2 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "system", 00:14:29.133 "dma_device_type": 1 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.133 "dma_device_type": 2 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "system", 00:14:29.133 "dma_device_type": 1 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.133 "dma_device_type": 2 00:14:29.133 } 00:14:29.133 ], 00:14:29.133 "driver_specific": { 00:14:29.133 "raid": { 00:14:29.133 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:29.133 "strip_size_kb": 0, 00:14:29.133 "state": "online", 00:14:29.133 "raid_level": "raid1", 00:14:29.133 "superblock": true, 00:14:29.133 "num_base_bdevs": 4, 00:14:29.133 "num_base_bdevs_discovered": 4, 00:14:29.133 "num_base_bdevs_operational": 4, 00:14:29.133 "base_bdevs_list": [ 00:14:29.133 { 00:14:29.133 "name": "pt1", 00:14:29.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.133 "is_configured": true, 00:14:29.133 "data_offset": 2048, 00:14:29.133 "data_size": 63488 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "name": "pt2", 00:14:29.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.133 "is_configured": true, 00:14:29.133 "data_offset": 2048, 00:14:29.133 "data_size": 63488 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "name": "pt3", 00:14:29.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.133 "is_configured": true, 00:14:29.133 "data_offset": 2048, 00:14:29.133 "data_size": 63488 00:14:29.133 }, 00:14:29.133 { 00:14:29.133 "name": "pt4", 00:14:29.133 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:29.133 "is_configured": true, 00:14:29.133 "data_offset": 2048, 00:14:29.133 "data_size": 63488 00:14:29.133 } 00:14:29.133 ] 00:14:29.133 } 00:14:29.133 } 00:14:29.133 }' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.133 pt2 00:14:29.133 pt3 00:14:29.133 pt4' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.133 13:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.133 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.392 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.392 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.392 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:29.392 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 [2024-10-01 13:48:39.346936] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74f13d0d-0253-4fb0-8d99-607d180d37b5 '!=' 74f13d0d-0253-4fb0-8d99-607d180d37b5 ']' 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 [2024-10-01 13:48:39.402635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:29.393 
13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.393 "name": "raid_bdev1", 00:14:29.393 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:29.393 "strip_size_kb": 0, 00:14:29.393 "state": 
"online", 00:14:29.393 "raid_level": "raid1", 00:14:29.393 "superblock": true, 00:14:29.393 "num_base_bdevs": 4, 00:14:29.393 "num_base_bdevs_discovered": 3, 00:14:29.393 "num_base_bdevs_operational": 3, 00:14:29.393 "base_bdevs_list": [ 00:14:29.393 { 00:14:29.393 "name": null, 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.393 "is_configured": false, 00:14:29.393 "data_offset": 0, 00:14:29.393 "data_size": 63488 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "name": "pt2", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "name": "pt3", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "name": "pt4", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 } 00:14:29.393 ] 00:14:29.393 }' 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.393 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.999 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.999 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.999 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.999 [2024-10-01 13:48:39.882569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.999 [2024-10-01 13:48:39.882742] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.999 [2024-10-01 13:48:39.882872] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.000 [2024-10-01 13:48:39.882973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.000 [2024-10-01 13:48:39.882987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 [2024-10-01 13:48:39.974540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.000 [2024-10-01 
13:48:39.974739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.000 [2024-10-01 13:48:39.974803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:30.000 [2024-10-01 13:48:39.974876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.000 [2024-10-01 13:48:39.978386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.000 [2024-10-01 13:48:39.978547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.000 [2024-10-01 13:48:39.978755] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.000 [2024-10-01 13:48:39.978935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.000 pt2 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.000 13:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.000 13:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.000 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.000 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.000 "name": "raid_bdev1", 00:14:30.000 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:30.000 "strip_size_kb": 0, 00:14:30.000 "state": "configuring", 00:14:30.000 "raid_level": "raid1", 00:14:30.000 "superblock": true, 00:14:30.000 "num_base_bdevs": 4, 00:14:30.000 "num_base_bdevs_discovered": 1, 00:14:30.000 "num_base_bdevs_operational": 3, 00:14:30.000 "base_bdevs_list": [ 00:14:30.000 { 00:14:30.000 "name": null, 00:14:30.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.000 "is_configured": false, 00:14:30.000 "data_offset": 2048, 00:14:30.000 "data_size": 63488 00:14:30.000 }, 00:14:30.000 { 00:14:30.000 "name": "pt2", 00:14:30.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.000 "is_configured": true, 00:14:30.000 "data_offset": 2048, 00:14:30.000 "data_size": 63488 00:14:30.000 }, 00:14:30.000 { 00:14:30.000 "name": null, 00:14:30.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.000 "is_configured": false, 00:14:30.000 "data_offset": 2048, 00:14:30.000 "data_size": 63488 00:14:30.000 }, 00:14:30.000 { 00:14:30.000 "name": null, 00:14:30.000 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.000 "is_configured": false, 00:14:30.000 "data_offset": 2048, 00:14:30.000 "data_size": 63488 00:14:30.000 
} 00:14:30.000 ] 00:14:30.000 }' 00:14:30.000 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.000 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.259 [2024-10-01 13:48:40.394312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.259 [2024-10-01 13:48:40.394640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.259 [2024-10-01 13:48:40.394758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:30.259 [2024-10-01 13:48:40.394848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.259 [2024-10-01 13:48:40.395503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.259 [2024-10-01 13:48:40.395649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.259 [2024-10-01 13:48:40.395856] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:30.259 [2024-10-01 13:48:40.395988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.259 pt3 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.259 "name": "raid_bdev1", 00:14:30.259 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:30.259 "strip_size_kb": 0, 00:14:30.259 "state": "configuring", 00:14:30.259 "raid_level": "raid1", 00:14:30.259 "superblock": true, 00:14:30.259 "num_base_bdevs": 4, 00:14:30.259 "num_base_bdevs_discovered": 2, 
00:14:30.259 "num_base_bdevs_operational": 3, 00:14:30.259 "base_bdevs_list": [ 00:14:30.259 { 00:14:30.259 "name": null, 00:14:30.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.259 "is_configured": false, 00:14:30.259 "data_offset": 2048, 00:14:30.259 "data_size": 63488 00:14:30.259 }, 00:14:30.259 { 00:14:30.259 "name": "pt2", 00:14:30.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.259 "is_configured": true, 00:14:30.259 "data_offset": 2048, 00:14:30.259 "data_size": 63488 00:14:30.259 }, 00:14:30.259 { 00:14:30.259 "name": "pt3", 00:14:30.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.259 "is_configured": true, 00:14:30.259 "data_offset": 2048, 00:14:30.259 "data_size": 63488 00:14:30.259 }, 00:14:30.259 { 00:14:30.259 "name": null, 00:14:30.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.259 "is_configured": false, 00:14:30.259 "data_offset": 2048, 00:14:30.259 "data_size": 63488 00:14:30.259 } 00:14:30.259 ] 00:14:30.259 }' 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.259 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.827 [2024-10-01 13:48:40.849712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:30.827 [2024-10-01 
13:48:40.850018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.827 [2024-10-01 13:48:40.850088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:30.827 [2024-10-01 13:48:40.850178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.827 [2024-10-01 13:48:40.850770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.827 [2024-10-01 13:48:40.850795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:30.827 [2024-10-01 13:48:40.850887] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:30.827 [2024-10-01 13:48:40.850918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:30.827 [2024-10-01 13:48:40.851073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:30.827 [2024-10-01 13:48:40.851085] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.827 [2024-10-01 13:48:40.851365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:30.827 [2024-10-01 13:48:40.851587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:30.827 [2024-10-01 13:48:40.851603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:30.827 [2024-10-01 13:48:40.851747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.827 pt4 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.827 13:48:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.827 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.827 "name": "raid_bdev1", 00:14:30.827 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:30.827 "strip_size_kb": 0, 00:14:30.827 "state": "online", 00:14:30.827 "raid_level": "raid1", 00:14:30.827 "superblock": true, 00:14:30.827 "num_base_bdevs": 4, 00:14:30.827 "num_base_bdevs_discovered": 3, 00:14:30.827 "num_base_bdevs_operational": 3, 00:14:30.827 "base_bdevs_list": [ 00:14:30.827 { 00:14:30.827 "name": null, 00:14:30.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.827 
"is_configured": false, 00:14:30.827 "data_offset": 2048, 00:14:30.827 "data_size": 63488 00:14:30.827 }, 00:14:30.827 { 00:14:30.827 "name": "pt2", 00:14:30.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.827 "is_configured": true, 00:14:30.827 "data_offset": 2048, 00:14:30.827 "data_size": 63488 00:14:30.827 }, 00:14:30.828 { 00:14:30.828 "name": "pt3", 00:14:30.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.828 "is_configured": true, 00:14:30.828 "data_offset": 2048, 00:14:30.828 "data_size": 63488 00:14:30.828 }, 00:14:30.828 { 00:14:30.828 "name": "pt4", 00:14:30.828 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.828 "is_configured": true, 00:14:30.828 "data_offset": 2048, 00:14:30.828 "data_size": 63488 00:14:30.828 } 00:14:30.828 ] 00:14:30.828 }' 00:14:30.828 13:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.828 13:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 [2024-10-01 13:48:41.289152] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.395 [2024-10-01 13:48:41.289199] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.395 [2024-10-01 13:48:41.289280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.395 [2024-10-01 13:48:41.289359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.395 [2024-10-01 13:48:41.289415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 [2024-10-01 13:48:41.353038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.395 [2024-10-01 13:48:41.353124] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:31.395 [2024-10-01 13:48:41.353146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:31.395 [2024-10-01 13:48:41.353163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.395 [2024-10-01 13:48:41.356337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.395 [2024-10-01 13:48:41.356390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.395 [2024-10-01 13:48:41.356493] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.395 [2024-10-01 13:48:41.356547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.395 [2024-10-01 13:48:41.356671] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:31.395 [2024-10-01 13:48:41.356689] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.395 [2024-10-01 13:48:41.356706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:31.395 [2024-10-01 13:48:41.356787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.395 [2024-10-01 13:48:41.356884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:31.395 pt1 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.395 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.395 "name": "raid_bdev1", 00:14:31.395 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:31.395 "strip_size_kb": 0, 00:14:31.395 "state": "configuring", 00:14:31.395 "raid_level": "raid1", 00:14:31.395 "superblock": true, 00:14:31.395 "num_base_bdevs": 4, 00:14:31.395 "num_base_bdevs_discovered": 2, 00:14:31.395 "num_base_bdevs_operational": 3, 00:14:31.395 "base_bdevs_list": [ 00:14:31.395 { 00:14:31.395 "name": null, 00:14:31.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.395 "is_configured": false, 00:14:31.395 
"data_offset": 2048, 00:14:31.395 "data_size": 63488 00:14:31.395 }, 00:14:31.395 { 00:14:31.395 "name": "pt2", 00:14:31.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.395 "is_configured": true, 00:14:31.395 "data_offset": 2048, 00:14:31.395 "data_size": 63488 00:14:31.395 }, 00:14:31.395 { 00:14:31.395 "name": "pt3", 00:14:31.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.395 "is_configured": true, 00:14:31.395 "data_offset": 2048, 00:14:31.395 "data_size": 63488 00:14:31.396 }, 00:14:31.396 { 00:14:31.396 "name": null, 00:14:31.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.396 "is_configured": false, 00:14:31.396 "data_offset": 2048, 00:14:31.396 "data_size": 63488 00:14:31.396 } 00:14:31.396 ] 00:14:31.396 }' 00:14:31.396 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.396 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.654 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:31.654 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:31.654 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.654 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.654 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:31.655 [2024-10-01 13:48:41.832495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:31.655 [2024-10-01 13:48:41.832774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.655 [2024-10-01 13:48:41.832815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:31.655 [2024-10-01 13:48:41.832829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.655 [2024-10-01 13:48:41.833362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.655 [2024-10-01 13:48:41.833384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:31.655 [2024-10-01 13:48:41.833494] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:31.655 [2024-10-01 13:48:41.833519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:31.655 [2024-10-01 13:48:41.833655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:31.655 [2024-10-01 13:48:41.833666] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.655 [2024-10-01 13:48:41.833985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:31.655 [2024-10-01 13:48:41.834123] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:31.655 [2024-10-01 13:48:41.834139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:31.655 [2024-10-01 13:48:41.834278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.655 pt4 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.655 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.914 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.914 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.914 "name": "raid_bdev1", 00:14:31.914 "uuid": "74f13d0d-0253-4fb0-8d99-607d180d37b5", 00:14:31.914 "strip_size_kb": 0, 00:14:31.914 "state": "online", 00:14:31.914 "raid_level": "raid1", 00:14:31.914 "superblock": true, 00:14:31.914 "num_base_bdevs": 4, 00:14:31.914 "num_base_bdevs_discovered": 3, 00:14:31.914 "num_base_bdevs_operational": 3, 00:14:31.914 
"base_bdevs_list": [ 00:14:31.914 { 00:14:31.914 "name": null, 00:14:31.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.914 "is_configured": false, 00:14:31.914 "data_offset": 2048, 00:14:31.914 "data_size": 63488 00:14:31.914 }, 00:14:31.914 { 00:14:31.914 "name": "pt2", 00:14:31.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.914 "is_configured": true, 00:14:31.914 "data_offset": 2048, 00:14:31.914 "data_size": 63488 00:14:31.914 }, 00:14:31.914 { 00:14:31.914 "name": "pt3", 00:14:31.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.914 "is_configured": true, 00:14:31.914 "data_offset": 2048, 00:14:31.914 "data_size": 63488 00:14:31.914 }, 00:14:31.914 { 00:14:31.914 "name": "pt4", 00:14:31.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.914 "is_configured": true, 00:14:31.914 "data_offset": 2048, 00:14:31.914 "data_size": 63488 00:14:31.914 } 00:14:31.914 ] 00:14:31.914 }' 00:14:31.914 13:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.914 13:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.173 [2024-10-01 13:48:42.284116] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 74f13d0d-0253-4fb0-8d99-607d180d37b5 '!=' 74f13d0d-0253-4fb0-8d99-607d180d37b5 ']' 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74438 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74438 ']' 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74438 00:14:32.173 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:32.174 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.174 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74438 00:14:32.432 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.432 killing process with pid 74438 00:14:32.432 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.432 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74438' 00:14:32.432 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74438 00:14:32.432 [2024-10-01 13:48:42.368102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.432 13:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # 
wait 74438 00:14:32.432 [2024-10-01 13:48:42.368216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.432 [2024-10-01 13:48:42.368297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.432 [2024-10-01 13:48:42.368312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:32.691 [2024-10-01 13:48:42.776126] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.070 ************************************ 00:14:34.070 END TEST raid_superblock_test 00:14:34.070 ************************************ 00:14:34.070 13:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:34.070 00:14:34.070 real 0m8.660s 00:14:34.070 user 0m13.263s 00:14:34.070 sys 0m1.932s 00:14:34.070 13:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.070 13:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.070 13:48:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:34.070 13:48:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:34.070 13:48:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.070 13:48:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.070 ************************************ 00:14:34.070 START TEST raid_read_error_test 00:14:34.070 ************************************ 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v3fIPtNrkN 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74931 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74931 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74931 ']' 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.070 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.071 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.071 13:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.330 [2024-10-01 13:48:44.312030] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:34.330 [2024-10-01 13:48:44.312187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74931 ] 00:14:34.330 [2024-10-01 13:48:44.488046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.588 [2024-10-01 13:48:44.703522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.847 [2024-10-01 13:48:44.927184] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.847 [2024-10-01 13:48:44.927259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.106 BaseBdev1_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.106 true 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
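The `BaseBdev1_malloc` / `bdev_error_create` / `bdev_passthru_create` sequence that follows is repeated once per base bdev: each malloc bdev is wrapped in an error bdev (so failures can be injected later with `bdev_error_inject_error`) and then in a passthru bdev, and the four passthru bdevs are assembled into the raid1 array. A sketch of that loop, with `rpc_cmd` stubbed to record calls rather than talk to the SPDK RPC socket:

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev setup loop for an error-injection run. rpc_cmd is
# a stub that records each call; in the real suite it drives the SPDK app over
# the UNIX-domain RPC socket (/var/tmp/spdk.sock).
num_base_bdevs=4
calls=()
rpc_cmd() { calls+=("$*"); }

for ((i = 1; i <= num_base_bdevs; i++)); do
  rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"      # backing store
  rpc_cmd bdev_error_create "BaseBdev${i}_malloc"                 # -> EE_BaseBdev${i}_malloc
  rpc_cmd bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
rpc_cmd bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
  -n raid_bdev1 -s   # -s: with on-disk superblock

echo "${#calls[@]} rpc calls recorded"
# prints: 13 rpc calls recorded
```

The error bdev sits *under* the passthru layer, which is why the trace later injects against `EE_BaseBdev1_malloc` while the raid array only ever sees `BaseBdev1`.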
00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.106 [2024-10-01 13:48:45.214835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.106 [2024-10-01 13:48:45.215137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.106 [2024-10-01 13:48:45.215198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:35.106 [2024-10-01 13:48:45.215292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.106 [2024-10-01 13:48:45.217979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.106 [2024-10-01 13:48:45.218031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.106 BaseBdev1 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.106 BaseBdev2_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.106 true 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.106 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 [2024-10-01 13:48:45.300626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:35.364 [2024-10-01 13:48:45.300898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.364 [2024-10-01 13:48:45.300966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:35.364 [2024-10-01 13:48:45.301064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.364 [2024-10-01 13:48:45.304416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.364 [2024-10-01 13:48:45.304465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.364 BaseBdev2 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 BaseBdev3_malloc 00:14:35.364 13:48:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.364 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 true 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.365 [2024-10-01 13:48:45.374829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:35.365 [2024-10-01 13:48:45.375078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.365 [2024-10-01 13:48:45.375108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:35.365 [2024-10-01 13:48:45.375128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.365 [2024-10-01 13:48:45.378309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.365 BaseBdev3 00:14:35.365 [2024-10-01 13:48:45.378535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.365 BaseBdev4_malloc 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.365 true 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.365 [2024-10-01 13:48:45.448180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:35.365 [2024-10-01 13:48:45.448259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.365 [2024-10-01 13:48:45.448281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:35.365 [2024-10-01 13:48:45.448299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.365 [2024-10-01 13:48:45.451341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.365 [2024-10-01 13:48:45.451392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:35.365 BaseBdev4 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.365 [2024-10-01 13:48:45.460359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.365 [2024-10-01 13:48:45.463349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.365 [2024-10-01 13:48:45.463599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.365 [2024-10-01 13:48:45.463709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.365 [2024-10-01 13:48:45.464070] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:35.365 [2024-10-01 13:48:45.464125] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:35.365 [2024-10-01 13:48:45.464538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:35.365 [2024-10-01 13:48:45.464880] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:35.365 [2024-10-01 13:48:45.464902] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:35.365 [2024-10-01 13:48:45.465137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:35.365 13:48:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:35.365 "name": "raid_bdev1",
00:14:35.365 "uuid": "f12d6803-fd3a-4dff-8943-5b33502025a1",
00:14:35.365 "strip_size_kb": 0,
00:14:35.365 "state": "online",
00:14:35.365 "raid_level": "raid1",
00:14:35.365 "superblock": true,
00:14:35.365 "num_base_bdevs": 4,
00:14:35.365 "num_base_bdevs_discovered": 4,
00:14:35.365 "num_base_bdevs_operational": 4,
00:14:35.365 "base_bdevs_list": [
00:14:35.365 {
00:14:35.365 "name": "BaseBdev1",
00:14:35.365 "uuid": "104a4c78-3480-50bd-9d9b-7d4bba762c18",
00:14:35.365 "is_configured": true,
00:14:35.365 "data_offset": 2048,
00:14:35.365 "data_size": 63488
00:14:35.365 },
00:14:35.365 {
00:14:35.365 "name": "BaseBdev2",
00:14:35.365 "uuid": "8eccb1cf-1bb3-5ff5-a1f4-94d378695cdd",
00:14:35.365 "is_configured": true,
00:14:35.365 "data_offset": 2048,
00:14:35.365 "data_size": 63488
00:14:35.365 },
00:14:35.365 {
00:14:35.365 "name": "BaseBdev3",
00:14:35.365 "uuid": "6ed2148c-5ba4-5d8a-bc42-d150ea6dbe13",
00:14:35.365 "is_configured": true,
00:14:35.365 "data_offset": 2048,
00:14:35.365 "data_size": 63488
00:14:35.365 },
00:14:35.365 {
00:14:35.365 "name": "BaseBdev4",
00:14:35.365 "uuid": "ffe5a171-5f69-54de-a30f-0c2a2f809dd3",
00:14:35.365 "is_configured": true,
00:14:35.365 "data_offset": 2048,
00:14:35.365 "data_size": 63488
00:14:35.365 }
00:14:35.365 ]
00:14:35.365 }'
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:35.365 13:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:35.931 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:35.931 13:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:14:35.931 [2024-10-01 13:48:45.978246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:36.902 "name": "raid_bdev1",
00:14:36.902 "uuid": "f12d6803-fd3a-4dff-8943-5b33502025a1",
00:14:36.902 "strip_size_kb": 0,
00:14:36.902 "state": "online",
00:14:36.902 "raid_level": "raid1",
00:14:36.902 "superblock": true,
00:14:36.902 "num_base_bdevs": 4,
00:14:36.902 "num_base_bdevs_discovered": 4,
00:14:36.902 "num_base_bdevs_operational": 4,
00:14:36.902 "base_bdevs_list": [
00:14:36.902 {
00:14:36.902 "name": "BaseBdev1",
00:14:36.902 "uuid": "104a4c78-3480-50bd-9d9b-7d4bba762c18",
00:14:36.902 "is_configured": true,
00:14:36.902 "data_offset": 2048,
00:14:36.902 "data_size": 63488
00:14:36.902 },
00:14:36.902 {
00:14:36.902 "name": "BaseBdev2",
00:14:36.902 "uuid": "8eccb1cf-1bb3-5ff5-a1f4-94d378695cdd",
00:14:36.902 "is_configured": true,
00:14:36.902 "data_offset": 2048,
00:14:36.902 "data_size": 63488
00:14:36.902 },
00:14:36.902 {
00:14:36.902 "name": "BaseBdev3",
00:14:36.902 "uuid": "6ed2148c-5ba4-5d8a-bc42-d150ea6dbe13",
00:14:36.902 "is_configured": true,
00:14:36.902 "data_offset": 2048,
00:14:36.902 "data_size": 63488
00:14:36.902 },
00:14:36.902 {
00:14:36.902 "name": "BaseBdev4",
00:14:36.902 "uuid": "ffe5a171-5f69-54de-a30f-0c2a2f809dd3",
00:14:36.902 "is_configured": true,
00:14:36.902 "data_offset": 2048,
00:14:36.902 "data_size": 63488
00:14:36.902 }
00:14:36.902 ]
00:14:36.902 }'
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:36.902 13:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:37.160 13:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:37.160 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:37.160 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:37.160 [2024-10-01 13:48:47.341917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:37.160 [2024-10-01 13:48:47.341971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:37.160 [2024-10-01 13:48:47.344874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:37.160 [2024-10-01 13:48:47.344964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:37.160 [2024-10-01 13:48:47.345110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:37.160 [2024-10-01 13:48:47.345129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:14:37.160 {
00:14:37.160 "results": [
00:14:37.160 {
00:14:37.160 "job": "raid_bdev1",
00:14:37.160 "core_mask": "0x1",
00:14:37.160 "workload": "randrw",
00:14:37.160 "percentage": 50,
00:14:37.160 "status": "finished",
00:14:37.160 "queue_depth": 1,
00:14:37.160 "io_size": 131072,
00:14:37.160 "runtime": 1.362759,
00:14:37.160 "iops": 7511.966532600409,
00:14:37.160 "mibps": 938.9958165750511,
00:14:37.160 "io_failed": 0,
00:14:37.160 "io_timeout": 0,
00:14:37.160 "avg_latency_us": 130.1490733864441,
00:14:37.160 "min_latency_us": 25.29156626506024,
00:14:37.160 "max_latency_us": 1612.0803212851406
00:14:37.161 }
00:14:37.161 ],
00:14:37.161 "core_count": 1
00:14:37.161 }
00:14:37.161 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:37.161 13:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74931
00:14:37.161 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74931 ']'
00:14:37.161 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74931
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74931
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:37.420 killing process with pid 74931 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74931'
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74931
00:14:37.420 13:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74931
00:14:37.420 [2024-10-01 13:48:47.402338] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:37.679 [2024-10-01 13:48:47.768623] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v3fIPtNrkN
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:14:39.576 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:39.577 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:39.577 13:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:14:39.577
00:14:39.577 real 0m5.091s user 0m5.810s sys 0m0.709s
00:14:39.577 13:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:39.577 13:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:39.577 ************************************
00:14:39.577 END TEST raid_read_error_test
00:14:39.577 ************************************
00:14:39.577 13:48:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write
00:14:39.577 13:48:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:39.577 13:48:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:39.577 13:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:39.577 ************************************
00:14:39.577 START TEST raid_write_error_test
00:14:39.577 ************************************
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t2LofWp6Sr
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75082
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75082
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75082 ']'
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:39.577 13:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:39.577 [2024-10-01 13:48:49.479322] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:14:39.577 [2024-10-01 13:48:49.480022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ]
00:14:39.577 [2024-10-01 13:48:49.646922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:39.836 [2024-10-01 13:48:49.919684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:40.096 [2024-10-01 13:48:50.144456] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:40.096 [2024-10-01 13:48:50.144508] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.353 BaseBdev1_malloc
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.353 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.354 true
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.354 [2024-10-01 13:48:50.424026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:14:40.354 [2024-10-01 13:48:50.424105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:40.354 [2024-10-01 13:48:50.424128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:40.354 [2024-10-01 13:48:50.424144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:40.354 [2024-10-01 13:48:50.426782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:40.354 [2024-10-01 13:48:50.426833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:40.354 BaseBdev1
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.354 BaseBdev2_malloc
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.354 true
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.354 [2024-10-01 13:48:50.509694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:14:40.354 [2024-10-01 13:48:50.509775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:40.354 [2024-10-01 13:48:50.509796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:14:40.354 [2024-10-01 13:48:50.509815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:40.354 [2024-10-01 13:48:50.513139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:40.354 [2024-10-01 13:48:50.513193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:40.354 BaseBdev2
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.354 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.612 BaseBdev3_malloc
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.612 true
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.612 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.612 [2024-10-01 13:48:50.583824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:14:40.612 [2024-10-01 13:48:50.583901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:40.613 [2024-10-01 13:48:50.583923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:14:40.613 [2024-10-01 13:48:50.583939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:40.613 [2024-10-01 13:48:50.587063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:40.613 [2024-10-01 13:48:50.587112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:40.613 BaseBdev3
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.613 BaseBdev4_malloc
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.613 true
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.613 [2024-10-01 13:48:50.658350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:14:40.613 [2024-10-01 13:48:50.658457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:40.613 [2024-10-01 13:48:50.658482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:14:40.613 [2024-10-01 13:48:50.658500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:40.613 [2024-10-01 13:48:50.661626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:40.613 [2024-10-01 13:48:50.661686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:40.613 BaseBdev4
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.613 [2024-10-01 13:48:50.670506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:40.613 [2024-10-01 13:48:50.673385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:40.613 [2024-10-01 13:48:50.673499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:40.613 [2024-10-01 13:48:50.673566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:40.613 [2024-10-01 13:48:50.673807] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:14:40.613 [2024-10-01 13:48:50.673831] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:40.613 [2024-10-01 13:48:50.674128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:14:40.613 [2024-10-01 13:48:50.674335] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:14:40.613 [2024-10-01 13:48:50.674363] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:14:40.613 [2024-10-01 13:48:50.674648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:40.613 "name": "raid_bdev1",
00:14:40.613 "uuid": "a7d2a09a-b6b6-4c9d-8624-c3dbefaec075",
00:14:40.613 "strip_size_kb": 0,
00:14:40.613 "state": "online",
00:14:40.613 "raid_level": "raid1",
00:14:40.613 "superblock": true,
00:14:40.613 "num_base_bdevs": 4,
00:14:40.613 "num_base_bdevs_discovered": 4,
00:14:40.613 "num_base_bdevs_operational": 4,
00:14:40.613 "base_bdevs_list": [
00:14:40.613 {
00:14:40.613 "name": "BaseBdev1",
00:14:40.613 "uuid": "fafa8648-b866-56a7-8bbe-a77110a7217f",
00:14:40.613 "is_configured": true,
00:14:40.613 "data_offset": 2048,
00:14:40.613 "data_size": 63488
00:14:40.613 },
00:14:40.613 {
00:14:40.613 "name": "BaseBdev2",
00:14:40.613 "uuid": "05c0e5fe-e104-54b8-8308-6777ce9ca762",
00:14:40.613 "is_configured": true,
00:14:40.613 "data_offset": 2048,
00:14:40.613 "data_size": 63488
00:14:40.613 },
00:14:40.613 {
00:14:40.613 "name": "BaseBdev3",
00:14:40.613 "uuid": "93482a9d-2fdb-5570-b4a7-60c0a36e18b6",
00:14:40.613 "is_configured": true,
00:14:40.613 "data_offset": 2048,
00:14:40.613 "data_size": 63488
00:14:40.613 },
00:14:40.613 {
00:14:40.613 "name": "BaseBdev4",
00:14:40.613 "uuid": "49608bcf-1186-5588-a44a-4e40c052ecb8",
00:14:40.613 "is_configured": true,
00:14:40.613 "data_offset": 2048,
00:14:40.613 "data_size": 63488
00:14:40.613 }
00:14:40.613 ]
00:14:40.613 }'
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:40.613 13:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.180 13:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:14:41.180 13:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:41.180 [2024-10-01 13:48:51.195725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:42.115 [2024-10-01 13:48:52.108329] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:14:42.115 [2024-10-01 13:48:52.108447] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:42.115 [2024-10-01 13:48:52.108685] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:42.115 "name": "raid_bdev1",
00:14:42.115 "uuid": "a7d2a09a-b6b6-4c9d-8624-c3dbefaec075",
00:14:42.115 "strip_size_kb": 0,
00:14:42.115 "state": "online",
00:14:42.115 "raid_level": "raid1",
00:14:42.115 "superblock": true,
00:14:42.115 "num_base_bdevs": 4,
00:14:42.115 "num_base_bdevs_discovered": 3,
00:14:42.115 "num_base_bdevs_operational": 3,
00:14:42.115 "base_bdevs_list": [
00:14:42.115 {
00:14:42.115 "name": null,
00:14:42.115 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:42.115 "is_configured": false,
00:14:42.115 "data_offset": 0,
00:14:42.115 "data_size": 63488
00:14:42.115 },
00:14:42.115 {
00:14:42.115 "name": "BaseBdev2",
00:14:42.115 "uuid": "05c0e5fe-e104-54b8-8308-6777ce9ca762",
00:14:42.115 "is_configured": true,
00:14:42.115 "data_offset": 2048,
00:14:42.115 "data_size": 63488
00:14:42.115 },
00:14:42.115 {
00:14:42.115 "name": "BaseBdev3",
00:14:42.115 "uuid": "93482a9d-2fdb-5570-b4a7-60c0a36e18b6",
00:14:42.115 "is_configured": true,
00:14:42.115 "data_offset": 2048,
00:14:42.115 "data_size": 63488
00:14:42.115 },
00:14:42.115 {
00:14:42.115 "name": "BaseBdev4",
00:14:42.115 "uuid": "49608bcf-1186-5588-a44a-4e40c052ecb8",
00:14:42.115 "is_configured": true,
00:14:42.115 "data_offset": 2048,
00:14:42.115 "data_size": 63488
00:14:42.115 }
00:14:42.115 ]
00:14:42.115 }' 00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.115 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.374 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.374 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.374 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.633 [2024-10-01 13:48:52.568545] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.633 [2024-10-01 13:48:52.568596] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.633 [2024-10-01 13:48:52.571473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.633 [2024-10-01 13:48:52.571529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.633 [2024-10-01 13:48:52.571640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.633 [2024-10-01 13:48:52.571659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:42.633 { 00:14:42.633 "results": [ 00:14:42.633 { 00:14:42.633 "job": "raid_bdev1", 00:14:42.633 "core_mask": "0x1", 00:14:42.633 "workload": "randrw", 00:14:42.633 "percentage": 50, 00:14:42.633 "status": "finished", 00:14:42.633 "queue_depth": 1, 00:14:42.633 "io_size": 131072, 00:14:42.633 "runtime": 1.372233, 00:14:42.633 "iops": 10949.306713947268, 00:14:42.633 "mibps": 1368.6633392434085, 00:14:42.633 "io_failed": 0, 00:14:42.633 "io_timeout": 0, 00:14:42.633 "avg_latency_us": 88.41409880453594, 00:14:42.633 "min_latency_us": 25.39437751004016, 00:14:42.633 "max_latency_us": 1519.9614457831326 00:14:42.633 } 00:14:42.633 ], 00:14:42.633 "core_count": 1 
00:14:42.633 } 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75082 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75082 ']' 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75082 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75082 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75082' 00:14:42.633 killing process with pid 75082 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75082 00:14:42.633 [2024-10-01 13:48:52.610523] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.633 13:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75082 00:14:42.892 [2024-10-01 13:48:52.966989] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t2LofWp6Sr 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:44.268 00:14:44.268 real 0m5.014s 00:14:44.268 user 0m5.761s 00:14:44.268 sys 0m0.746s 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.268 13:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.268 ************************************ 00:14:44.268 END TEST raid_write_error_test 00:14:44.268 ************************************ 00:14:44.268 13:48:54 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:44.268 13:48:54 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:44.268 13:48:54 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:44.268 13:48:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:44.268 13:48:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.268 13:48:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.268 ************************************ 00:14:44.268 START TEST raid_rebuild_test 00:14:44.268 ************************************ 00:14:44.268 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:14:44.268 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:44.268 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:44.268 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:44.269 
13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75228 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75228 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75228 ']' 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.269 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.527 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.527 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.527 13:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.527 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.527 Zero copy mechanism will not be used. 00:14:44.527 [2024-10-01 13:48:54.548549] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:44.527 [2024-10-01 13:48:54.548703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75228 ] 00:14:44.785 [2024-10-01 13:48:54.723674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.785 [2024-10-01 13:48:54.931141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.043 [2024-10-01 13:48:55.144168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.043 [2024-10-01 13:48:55.144220] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.301 BaseBdev1_malloc 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.301 [2024-10-01 13:48:55.454613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:45.301 
[2024-10-01 13:48:55.454708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.301 [2024-10-01 13:48:55.454735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:45.301 [2024-10-01 13:48:55.454755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.301 [2024-10-01 13:48:55.457309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.301 [2024-10-01 13:48:55.457356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:45.301 BaseBdev1 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.301 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 BaseBdev2_malloc 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 [2024-10-01 13:48:55.522581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:45.560 [2024-10-01 13:48:55.522652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.560 [2024-10-01 13:48:55.522673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:45.560 [2024-10-01 13:48:55.522691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.560 [2024-10-01 13:48:55.525099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.560 [2024-10-01 13:48:55.525144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:45.560 BaseBdev2 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 spare_malloc 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 spare_delay 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 [2024-10-01 13:48:55.593674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.560 [2024-10-01 13:48:55.593750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:45.560 [2024-10-01 13:48:55.593772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:45.560 [2024-10-01 13:48:55.593787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.560 [2024-10-01 13:48:55.596252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.560 [2024-10-01 13:48:55.596301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.560 spare 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 [2024-10-01 13:48:55.605696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.560 [2024-10-01 13:48:55.607818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.560 [2024-10-01 13:48:55.607912] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:45.560 [2024-10-01 13:48:55.607927] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:45.560 [2024-10-01 13:48:55.608214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:45.560 [2024-10-01 13:48:55.608355] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:45.560 [2024-10-01 13:48:55.608375] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:45.560 [2024-10-01 13:48:55.608554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.560 "name": "raid_bdev1", 00:14:45.560 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:45.560 "strip_size_kb": 0, 00:14:45.560 "state": "online", 00:14:45.560 
"raid_level": "raid1", 00:14:45.560 "superblock": false, 00:14:45.560 "num_base_bdevs": 2, 00:14:45.560 "num_base_bdevs_discovered": 2, 00:14:45.560 "num_base_bdevs_operational": 2, 00:14:45.560 "base_bdevs_list": [ 00:14:45.560 { 00:14:45.560 "name": "BaseBdev1", 00:14:45.560 "uuid": "cd268075-df54-5393-9e47-249a61c9d99c", 00:14:45.560 "is_configured": true, 00:14:45.560 "data_offset": 0, 00:14:45.560 "data_size": 65536 00:14:45.560 }, 00:14:45.560 { 00:14:45.560 "name": "BaseBdev2", 00:14:45.560 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:45.560 "is_configured": true, 00:14:45.560 "data_offset": 0, 00:14:45.560 "data_size": 65536 00:14:45.560 } 00:14:45.560 ] 00:14:45.560 }' 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.560 13:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:46.145 [2024-10-01 13:48:56.069466] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.145 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.146 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:46.405 [2024-10-01 13:48:56.424746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:46.405 /dev/nbd0 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.405 1+0 records in 00:14:46.405 1+0 records out 00:14:46.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329457 s, 12.4 MB/s 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:46.405 13:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:52.967 65536+0 records in 00:14:52.967 65536+0 records out 00:14:52.967 33554432 bytes (34 MB, 32 MiB) copied, 5.91041 s, 5.7 MB/s 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:52.967 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:52.968 [2024-10-01 13:49:02.614785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.968 [2024-10-01 13:49:02.648394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.968 13:49:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.968 "name": "raid_bdev1", 00:14:52.968 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:52.968 "strip_size_kb": 0, 00:14:52.968 "state": "online", 00:14:52.968 "raid_level": "raid1", 00:14:52.968 "superblock": false, 00:14:52.968 "num_base_bdevs": 2, 00:14:52.968 "num_base_bdevs_discovered": 1, 00:14:52.968 "num_base_bdevs_operational": 1, 00:14:52.968 "base_bdevs_list": [ 00:14:52.968 { 00:14:52.968 "name": null, 00:14:52.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.968 "is_configured": false, 00:14:52.968 "data_offset": 0, 00:14:52.968 "data_size": 65536 00:14:52.968 }, 00:14:52.968 { 00:14:52.968 "name": "BaseBdev2", 00:14:52.968 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:52.968 "is_configured": true, 00:14:52.968 "data_offset": 0, 00:14:52.968 "data_size": 65536 00:14:52.968 } 00:14:52.968 ] 00:14:52.968 }' 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.968 13:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.968 13:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.968 13:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.968 13:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.968 [2024-10-01 13:49:03.019926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.968 [2024-10-01 13:49:03.037568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:52.968 13:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.968 13:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:52.968 [2024-10-01 13:49:03.039898] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.906 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.906 "name": "raid_bdev1", 00:14:53.906 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:53.906 "strip_size_kb": 0, 00:14:53.906 "state": "online", 00:14:53.906 "raid_level": "raid1", 00:14:53.906 "superblock": false, 00:14:53.906 "num_base_bdevs": 2, 00:14:53.906 "num_base_bdevs_discovered": 2, 00:14:53.906 "num_base_bdevs_operational": 2, 00:14:53.906 "process": { 00:14:53.906 "type": "rebuild", 00:14:53.906 "target": "spare", 00:14:53.906 "progress": { 00:14:53.906 
"blocks": 20480, 00:14:53.906 "percent": 31 00:14:53.906 } 00:14:53.906 }, 00:14:53.906 "base_bdevs_list": [ 00:14:53.907 { 00:14:53.907 "name": "spare", 00:14:53.907 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:53.907 "is_configured": true, 00:14:53.907 "data_offset": 0, 00:14:53.907 "data_size": 65536 00:14:53.907 }, 00:14:53.907 { 00:14:53.907 "name": "BaseBdev2", 00:14:53.907 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:53.907 "is_configured": true, 00:14:53.907 "data_offset": 0, 00:14:53.907 "data_size": 65536 00:14:53.907 } 00:14:53.907 ] 00:14:53.907 }' 00:14:53.907 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.165 [2024-10-01 13:49:04.179693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.165 [2024-10-01 13:49:04.246874] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:54.165 [2024-10-01 13:49:04.246992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.165 [2024-10-01 13:49:04.247014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.165 [2024-10-01 13:49:04.247030] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.165 13:49:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.165 "name": "raid_bdev1", 00:14:54.165 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:54.165 "strip_size_kb": 0, 00:14:54.165 "state": "online", 00:14:54.165 "raid_level": "raid1", 00:14:54.165 
"superblock": false, 00:14:54.165 "num_base_bdevs": 2, 00:14:54.165 "num_base_bdevs_discovered": 1, 00:14:54.165 "num_base_bdevs_operational": 1, 00:14:54.165 "base_bdevs_list": [ 00:14:54.165 { 00:14:54.165 "name": null, 00:14:54.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.165 "is_configured": false, 00:14:54.165 "data_offset": 0, 00:14:54.165 "data_size": 65536 00:14:54.165 }, 00:14:54.165 { 00:14:54.165 "name": "BaseBdev2", 00:14:54.165 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:54.165 "is_configured": true, 00:14:54.165 "data_offset": 0, 00:14:54.165 "data_size": 65536 00:14:54.165 } 00:14:54.165 ] 00:14:54.165 }' 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.165 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.732 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:54.732 "name": "raid_bdev1", 00:14:54.732 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:54.732 "strip_size_kb": 0, 00:14:54.732 "state": "online", 00:14:54.732 "raid_level": "raid1", 00:14:54.732 "superblock": false, 00:14:54.732 "num_base_bdevs": 2, 00:14:54.732 "num_base_bdevs_discovered": 1, 00:14:54.732 "num_base_bdevs_operational": 1, 00:14:54.732 "base_bdevs_list": [ 00:14:54.732 { 00:14:54.732 "name": null, 00:14:54.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.733 "is_configured": false, 00:14:54.733 "data_offset": 0, 00:14:54.733 "data_size": 65536 00:14:54.733 }, 00:14:54.733 { 00:14:54.733 "name": "BaseBdev2", 00:14:54.733 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:54.733 "is_configured": true, 00:14:54.733 "data_offset": 0, 00:14:54.733 "data_size": 65536 00:14:54.733 } 00:14:54.733 ] 00:14:54.733 }' 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.733 [2024-10-01 13:49:04.843110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.733 [2024-10-01 13:49:04.860363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:54.733 13:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.733 
13:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:54.733 [2024-10-01 13:49:04.863003] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.111 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.111 "name": "raid_bdev1", 00:14:56.111 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:56.111 "strip_size_kb": 0, 00:14:56.111 "state": "online", 00:14:56.112 "raid_level": "raid1", 00:14:56.112 "superblock": false, 00:14:56.112 "num_base_bdevs": 2, 00:14:56.112 "num_base_bdevs_discovered": 2, 00:14:56.112 "num_base_bdevs_operational": 2, 00:14:56.112 "process": { 00:14:56.112 "type": "rebuild", 00:14:56.112 "target": "spare", 00:14:56.112 "progress": { 00:14:56.112 "blocks": 20480, 00:14:56.112 "percent": 31 00:14:56.112 } 00:14:56.112 }, 00:14:56.112 "base_bdevs_list": [ 
00:14:56.112 { 00:14:56.112 "name": "spare", 00:14:56.112 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:56.112 "is_configured": true, 00:14:56.112 "data_offset": 0, 00:14:56.112 "data_size": 65536 00:14:56.112 }, 00:14:56.112 { 00:14:56.112 "name": "BaseBdev2", 00:14:56.112 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:56.112 "is_configured": true, 00:14:56.112 "data_offset": 0, 00:14:56.112 "data_size": 65536 00:14:56.112 } 00:14:56.112 ] 00:14:56.112 }' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=381 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.112 13:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.112 
13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.112 "name": "raid_bdev1", 00:14:56.112 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:56.112 "strip_size_kb": 0, 00:14:56.112 "state": "online", 00:14:56.112 "raid_level": "raid1", 00:14:56.112 "superblock": false, 00:14:56.112 "num_base_bdevs": 2, 00:14:56.112 "num_base_bdevs_discovered": 2, 00:14:56.112 "num_base_bdevs_operational": 2, 00:14:56.112 "process": { 00:14:56.112 "type": "rebuild", 00:14:56.112 "target": "spare", 00:14:56.112 "progress": { 00:14:56.112 "blocks": 22528, 00:14:56.112 "percent": 34 00:14:56.112 } 00:14:56.112 }, 00:14:56.112 "base_bdevs_list": [ 00:14:56.112 { 00:14:56.112 "name": "spare", 00:14:56.112 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:56.112 "is_configured": true, 00:14:56.112 "data_offset": 0, 00:14:56.112 "data_size": 65536 00:14:56.112 }, 00:14:56.112 { 00:14:56.112 "name": "BaseBdev2", 00:14:56.112 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:56.112 "is_configured": true, 00:14:56.112 "data_offset": 0, 00:14:56.112 "data_size": 65536 00:14:56.112 } 00:14:56.112 ] 00:14:56.112 }' 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.112 13:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.049 "name": "raid_bdev1", 00:14:57.049 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:57.049 "strip_size_kb": 0, 00:14:57.049 "state": "online", 00:14:57.049 "raid_level": "raid1", 00:14:57.049 "superblock": false, 00:14:57.049 "num_base_bdevs": 2, 00:14:57.049 "num_base_bdevs_discovered": 2, 00:14:57.049 "num_base_bdevs_operational": 2, 00:14:57.049 "process": { 
00:14:57.049 "type": "rebuild", 00:14:57.049 "target": "spare", 00:14:57.049 "progress": { 00:14:57.049 "blocks": 45056, 00:14:57.049 "percent": 68 00:14:57.049 } 00:14:57.049 }, 00:14:57.049 "base_bdevs_list": [ 00:14:57.049 { 00:14:57.049 "name": "spare", 00:14:57.049 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:57.049 "is_configured": true, 00:14:57.049 "data_offset": 0, 00:14:57.049 "data_size": 65536 00:14:57.049 }, 00:14:57.049 { 00:14:57.049 "name": "BaseBdev2", 00:14:57.049 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:57.049 "is_configured": true, 00:14:57.049 "data_offset": 0, 00:14:57.049 "data_size": 65536 00:14:57.049 } 00:14:57.049 ] 00:14:57.049 }' 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.049 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.308 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.308 13:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.244 [2024-10-01 13:49:08.091160] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.244 [2024-10-01 13:49:08.091265] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.244 [2024-10-01 13:49:08.091326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.244 "name": "raid_bdev1", 00:14:58.244 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:58.244 "strip_size_kb": 0, 00:14:58.244 "state": "online", 00:14:58.244 "raid_level": "raid1", 00:14:58.244 "superblock": false, 00:14:58.244 "num_base_bdevs": 2, 00:14:58.244 "num_base_bdevs_discovered": 2, 00:14:58.244 "num_base_bdevs_operational": 2, 00:14:58.244 "base_bdevs_list": [ 00:14:58.244 { 00:14:58.244 "name": "spare", 00:14:58.244 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:58.244 "is_configured": true, 00:14:58.244 "data_offset": 0, 00:14:58.244 "data_size": 65536 00:14:58.244 }, 00:14:58.244 { 00:14:58.244 "name": "BaseBdev2", 00:14:58.244 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:58.244 "is_configured": true, 00:14:58.244 "data_offset": 0, 00:14:58.244 "data_size": 65536 00:14:58.244 } 00:14:58.244 ] 00:14:58.244 }' 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.244 13:49:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.244 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.503 "name": "raid_bdev1", 00:14:58.503 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:58.503 "strip_size_kb": 0, 00:14:58.503 "state": "online", 00:14:58.503 "raid_level": "raid1", 00:14:58.503 "superblock": false, 00:14:58.503 "num_base_bdevs": 2, 00:14:58.503 "num_base_bdevs_discovered": 2, 00:14:58.503 "num_base_bdevs_operational": 2, 00:14:58.503 "base_bdevs_list": [ 00:14:58.503 { 00:14:58.503 "name": "spare", 00:14:58.503 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:58.503 "is_configured": true, 
00:14:58.503 "data_offset": 0, 00:14:58.503 "data_size": 65536 00:14:58.503 }, 00:14:58.503 { 00:14:58.503 "name": "BaseBdev2", 00:14:58.503 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:58.503 "is_configured": true, 00:14:58.503 "data_offset": 0, 00:14:58.503 "data_size": 65536 00:14:58.503 } 00:14:58.503 ] 00:14:58.503 }' 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.503 13:49:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.503 "name": "raid_bdev1", 00:14:58.503 "uuid": "ffac4e1f-a536-4802-b712-0b50861f8347", 00:14:58.503 "strip_size_kb": 0, 00:14:58.503 "state": "online", 00:14:58.503 "raid_level": "raid1", 00:14:58.503 "superblock": false, 00:14:58.503 "num_base_bdevs": 2, 00:14:58.503 "num_base_bdevs_discovered": 2, 00:14:58.503 "num_base_bdevs_operational": 2, 00:14:58.503 "base_bdevs_list": [ 00:14:58.503 { 00:14:58.503 "name": "spare", 00:14:58.503 "uuid": "db39c0fe-633f-566f-b6cf-a2d8e753379a", 00:14:58.503 "is_configured": true, 00:14:58.503 "data_offset": 0, 00:14:58.503 "data_size": 65536 00:14:58.503 }, 00:14:58.503 { 00:14:58.503 "name": "BaseBdev2", 00:14:58.503 "uuid": "0e2a8556-f0cd-5ab0-b0da-d44da6c69a3a", 00:14:58.503 "is_configured": true, 00:14:58.503 "data_offset": 0, 00:14:58.503 "data_size": 65536 00:14:58.503 } 00:14:58.503 ] 00:14:58.503 }' 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.503 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.763 [2024-10-01 13:49:08.896354] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.763 [2024-10-01 
13:49:08.896422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.763 [2024-10-01 13:49:08.896533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.763 [2024-10-01 13:49:08.896636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.763 [2024-10-01 13:49:08.896649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.763 13:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:59.021 /dev/nbd0 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.021 1+0 records in 00:14:59.021 1+0 records out 00:14:59.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467791 s, 8.8 MB/s 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:59.021 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.282 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:59.282 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:59.282 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.282 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.282 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:59.282 /dev/nbd1 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.541 1+0 records in 00:14:59.541 1+0 records out 00:14:59.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362171 s, 11.3 MB/s 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.541 13:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.801 13:49:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.059 13:49:10 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.059 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75228 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75228 ']' 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75228 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75228 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.317 killing process with pid 75228 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75228' 00:15:00.317 Received shutdown signal, test time was about 60.000000 seconds 00:15:00.317 00:15:00.317 Latency(us) 00:15:00.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.317 =================================================================================================================== 00:15:00.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75228 00:15:00.317 [2024-10-01 13:49:10.296549] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.317 13:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75228 00:15:00.574 [2024-10-01 13:49:10.624002] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.950 13:49:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:01.950 00:15:01.950 real 0m17.497s 00:15:01.950 user 0m18.398s 00:15:01.950 sys 0m4.456s 00:15:01.950 13:49:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.950 13:49:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 ************************************ 00:15:01.950 END TEST raid_rebuild_test 00:15:01.950 ************************************ 00:15:01.950 13:49:12 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:01.950 13:49:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:01.950 13:49:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.950 13:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 ************************************ 00:15:01.950 START TEST raid_rebuild_test_sb 00:15:01.950 ************************************ 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.950 13:49:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75703 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75703 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75703 ']' 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.950 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.951 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.951 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.951 13:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 [2024-10-01 13:49:12.156853] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:02.210 [2024-10-01 13:49:12.157239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:02.210 Zero copy mechanism will not be used. 
00:15:02.210 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75703 ] 00:15:02.210 [2024-10-01 13:49:12.326662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.469 [2024-10-01 13:49:12.547421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.727 [2024-10-01 13:49:12.751702] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.727 [2024-10-01 13:49:12.751747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.986 BaseBdev1_malloc 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.986 [2024-10-01 13:49:13.064436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:02.986 [2024-10-01 13:49:13.064539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:02.986 [2024-10-01 13:49:13.064564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.986 [2024-10-01 13:49:13.064587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.986 [2024-10-01 13:49:13.067628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.986 [2024-10-01 13:49:13.067801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.986 BaseBdev1 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.986 BaseBdev2_malloc 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.986 [2024-10-01 13:49:13.134109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:02.986 [2024-10-01 13:49:13.134223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.986 [2024-10-01 13:49:13.134259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.986 [2024-10-01 13:49:13.134277] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.986 [2024-10-01 13:49:13.137508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.986 [2024-10-01 13:49:13.137557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:02.986 BaseBdev2 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.986 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.246 spare_malloc 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.246 spare_delay 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.246 [2024-10-01 13:49:13.206212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.246 [2024-10-01 13:49:13.206306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:03.246 [2024-10-01 13:49:13.206333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:03.246 [2024-10-01 13:49:13.206349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.246 [2024-10-01 13:49:13.209724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.246 spare 00:15:03.246 [2024-10-01 13:49:13.210028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.246 [2024-10-01 13:49:13.218306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.246 [2024-10-01 13:49:13.221114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.246 [2024-10-01 13:49:13.221520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.246 [2024-10-01 13:49:13.221546] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:03.246 [2024-10-01 13:49:13.221903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:03.246 [2024-10-01 13:49:13.222103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.246 [2024-10-01 13:49:13.222115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.246 [2024-10-01 13:49:13.222354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.246 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.246 "name": "raid_bdev1", 00:15:03.246 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:03.246 
"strip_size_kb": 0, 00:15:03.246 "state": "online", 00:15:03.246 "raid_level": "raid1", 00:15:03.246 "superblock": true, 00:15:03.246 "num_base_bdevs": 2, 00:15:03.246 "num_base_bdevs_discovered": 2, 00:15:03.246 "num_base_bdevs_operational": 2, 00:15:03.246 "base_bdevs_list": [ 00:15:03.246 { 00:15:03.246 "name": "BaseBdev1", 00:15:03.246 "uuid": "22fbeae5-afb5-5a7f-a1c4-b7c730aeb308", 00:15:03.246 "is_configured": true, 00:15:03.247 "data_offset": 2048, 00:15:03.247 "data_size": 63488 00:15:03.247 }, 00:15:03.247 { 00:15:03.247 "name": "BaseBdev2", 00:15:03.247 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:03.247 "is_configured": true, 00:15:03.247 "data_offset": 2048, 00:15:03.247 "data_size": 63488 00:15:03.247 } 00:15:03.247 ] 00:15:03.247 }' 00:15:03.247 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.247 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.507 [2024-10-01 13:49:13.650086] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.507 13:49:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.507 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.767 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:03.767 [2024-10-01 13:49:13.921943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:03.767 /dev/nbd0 00:15:04.026 
13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.026 1+0 records in 00:15:04.026 1+0 records out 00:15:04.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413081 s, 9.9 MB/s 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:04.026 13:49:13 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:04.026 13:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:09.300 63488+0 records in 00:15:09.300 63488+0 records out 00:15:09.300 32505856 bytes (33 MB, 31 MiB) copied, 5.45983 s, 6.0 MB/s 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.300 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.559 [2024-10-01 13:49:19.700671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.559 [2024-10-01 13:49:19.719961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.559 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.560 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.821 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.821 "name": "raid_bdev1", 00:15:09.821 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:09.821 "strip_size_kb": 0, 00:15:09.821 "state": "online", 00:15:09.821 "raid_level": "raid1", 00:15:09.821 "superblock": true, 00:15:09.821 "num_base_bdevs": 2, 00:15:09.821 "num_base_bdevs_discovered": 1, 00:15:09.821 "num_base_bdevs_operational": 1, 00:15:09.821 "base_bdevs_list": [ 00:15:09.821 { 00:15:09.821 "name": null, 00:15:09.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.821 "is_configured": false, 00:15:09.821 "data_offset": 0, 00:15:09.821 "data_size": 63488 00:15:09.821 }, 00:15:09.821 { 00:15:09.821 "name": "BaseBdev2", 00:15:09.821 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:09.821 "is_configured": true, 00:15:09.821 "data_offset": 2048, 00:15:09.821 "data_size": 63488 00:15:09.821 } 00:15:09.821 ] 00:15:09.821 }' 00:15:09.821 13:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.821 13:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.079 13:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.079 13:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:10.079 13:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.079 [2024-10-01 13:49:20.191652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.079 [2024-10-01 13:49:20.208902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:10.079 13:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.079 13:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:10.079 [2024-10-01 13:49:20.211273] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.455 "name": "raid_bdev1", 00:15:11.455 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 
00:15:11.455 "strip_size_kb": 0, 00:15:11.455 "state": "online", 00:15:11.455 "raid_level": "raid1", 00:15:11.455 "superblock": true, 00:15:11.455 "num_base_bdevs": 2, 00:15:11.455 "num_base_bdevs_discovered": 2, 00:15:11.455 "num_base_bdevs_operational": 2, 00:15:11.455 "process": { 00:15:11.455 "type": "rebuild", 00:15:11.455 "target": "spare", 00:15:11.455 "progress": { 00:15:11.455 "blocks": 20480, 00:15:11.455 "percent": 32 00:15:11.455 } 00:15:11.455 }, 00:15:11.455 "base_bdevs_list": [ 00:15:11.455 { 00:15:11.455 "name": "spare", 00:15:11.455 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:11.455 "is_configured": true, 00:15:11.455 "data_offset": 2048, 00:15:11.455 "data_size": 63488 00:15:11.455 }, 00:15:11.455 { 00:15:11.455 "name": "BaseBdev2", 00:15:11.455 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:11.455 "is_configured": true, 00:15:11.455 "data_offset": 2048, 00:15:11.455 "data_size": 63488 00:15:11.455 } 00:15:11.455 ] 00:15:11.455 }' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 [2024-10-01 13:49:21.366513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.455 [2024-10-01 13:49:21.417471] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:15:11.455 [2024-10-01 13:49:21.417838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.455 [2024-10-01 13:49:21.418040] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.455 [2024-10-01 13:49:21.418151] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 13:49:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.455 "name": "raid_bdev1", 00:15:11.455 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:11.455 "strip_size_kb": 0, 00:15:11.455 "state": "online", 00:15:11.455 "raid_level": "raid1", 00:15:11.455 "superblock": true, 00:15:11.455 "num_base_bdevs": 2, 00:15:11.455 "num_base_bdevs_discovered": 1, 00:15:11.455 "num_base_bdevs_operational": 1, 00:15:11.455 "base_bdevs_list": [ 00:15:11.455 { 00:15:11.455 "name": null, 00:15:11.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.455 "is_configured": false, 00:15:11.455 "data_offset": 0, 00:15:11.455 "data_size": 63488 00:15:11.455 }, 00:15:11.455 { 00:15:11.455 "name": "BaseBdev2", 00:15:11.455 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:11.455 "is_configured": true, 00:15:11.455 "data_offset": 2048, 00:15:11.455 "data_size": 63488 00:15:11.455 } 00:15:11.455 ] 00:15:11.455 }' 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.455 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.716 "name": "raid_bdev1", 00:15:11.716 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:11.716 "strip_size_kb": 0, 00:15:11.716 "state": "online", 00:15:11.716 "raid_level": "raid1", 00:15:11.716 "superblock": true, 00:15:11.716 "num_base_bdevs": 2, 00:15:11.716 "num_base_bdevs_discovered": 1, 00:15:11.716 "num_base_bdevs_operational": 1, 00:15:11.716 "base_bdevs_list": [ 00:15:11.716 { 00:15:11.716 "name": null, 00:15:11.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.716 "is_configured": false, 00:15:11.716 "data_offset": 0, 00:15:11.716 "data_size": 63488 00:15:11.716 }, 00:15:11.716 { 00:15:11.716 "name": "BaseBdev2", 00:15:11.716 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:11.716 "is_configured": true, 00:15:11.716 "data_offset": 2048, 00:15:11.716 "data_size": 63488 00:15:11.716 } 00:15:11.716 ] 00:15:11.716 }' 00:15:11.716 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.975 [2024-10-01 13:49:21.978435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.975 [2024-10-01 13:49:21.995298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.975 13:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:11.975 [2024-10-01 13:49:21.997682] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:12.911 "name": "raid_bdev1", 00:15:12.911 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:12.911 "strip_size_kb": 0, 00:15:12.911 "state": "online", 00:15:12.911 "raid_level": "raid1", 00:15:12.911 "superblock": true, 00:15:12.911 "num_base_bdevs": 2, 00:15:12.911 "num_base_bdevs_discovered": 2, 00:15:12.911 "num_base_bdevs_operational": 2, 00:15:12.911 "process": { 00:15:12.911 "type": "rebuild", 00:15:12.911 "target": "spare", 00:15:12.911 "progress": { 00:15:12.911 "blocks": 20480, 00:15:12.911 "percent": 32 00:15:12.911 } 00:15:12.911 }, 00:15:12.911 "base_bdevs_list": [ 00:15:12.911 { 00:15:12.911 "name": "spare", 00:15:12.911 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:12.911 "is_configured": true, 00:15:12.911 "data_offset": 2048, 00:15:12.911 "data_size": 63488 00:15:12.911 }, 00:15:12.911 { 00:15:12.911 "name": "BaseBdev2", 00:15:12.911 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:12.911 "is_configured": true, 00:15:12.911 "data_offset": 2048, 00:15:12.911 "data_size": 63488 00:15:12.911 } 00:15:12.911 ] 00:15:12.911 }' 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.911 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:13.170 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:13.170 13:49:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.170 "name": "raid_bdev1", 00:15:13.170 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:13.170 "strip_size_kb": 0, 00:15:13.170 "state": "online", 00:15:13.170 "raid_level": "raid1", 00:15:13.170 "superblock": true, 00:15:13.170 "num_base_bdevs": 2, 00:15:13.170 "num_base_bdevs_discovered": 2, 00:15:13.170 "num_base_bdevs_operational": 2, 00:15:13.170 "process": { 00:15:13.170 
"type": "rebuild", 00:15:13.170 "target": "spare", 00:15:13.170 "progress": { 00:15:13.170 "blocks": 22528, 00:15:13.170 "percent": 35 00:15:13.170 } 00:15:13.170 }, 00:15:13.170 "base_bdevs_list": [ 00:15:13.170 { 00:15:13.170 "name": "spare", 00:15:13.170 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:13.170 "is_configured": true, 00:15:13.170 "data_offset": 2048, 00:15:13.170 "data_size": 63488 00:15:13.170 }, 00:15:13.170 { 00:15:13.170 "name": "BaseBdev2", 00:15:13.170 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:13.170 "is_configured": true, 00:15:13.170 "data_offset": 2048, 00:15:13.170 "data_size": 63488 00:15:13.170 } 00:15:13.170 ] 00:15:13.170 }' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.170 13:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.106 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.364 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:14.364 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.365 "name": "raid_bdev1", 00:15:14.365 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:14.365 "strip_size_kb": 0, 00:15:14.365 "state": "online", 00:15:14.365 "raid_level": "raid1", 00:15:14.365 "superblock": true, 00:15:14.365 "num_base_bdevs": 2, 00:15:14.365 "num_base_bdevs_discovered": 2, 00:15:14.365 "num_base_bdevs_operational": 2, 00:15:14.365 "process": { 00:15:14.365 "type": "rebuild", 00:15:14.365 "target": "spare", 00:15:14.365 "progress": { 00:15:14.365 "blocks": 45056, 00:15:14.365 "percent": 70 00:15:14.365 } 00:15:14.365 }, 00:15:14.365 "base_bdevs_list": [ 00:15:14.365 { 00:15:14.365 "name": "spare", 00:15:14.365 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:14.365 "is_configured": true, 00:15:14.365 "data_offset": 2048, 00:15:14.365 "data_size": 63488 00:15:14.365 }, 00:15:14.365 { 00:15:14.365 "name": "BaseBdev2", 00:15:14.365 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:14.365 "is_configured": true, 00:15:14.365 "data_offset": 2048, 00:15:14.365 "data_size": 63488 00:15:14.365 } 00:15:14.365 ] 00:15:14.365 }' 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.365 
13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.365 13:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.933 [2024-10-01 13:49:25.112969] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:14.933 [2024-10-01 13:49:25.113327] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:14.933 [2024-10-01 13:49:25.113570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.501 "name": "raid_bdev1", 00:15:15.501 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:15.501 
"strip_size_kb": 0, 00:15:15.501 "state": "online", 00:15:15.501 "raid_level": "raid1", 00:15:15.501 "superblock": true, 00:15:15.501 "num_base_bdevs": 2, 00:15:15.501 "num_base_bdevs_discovered": 2, 00:15:15.501 "num_base_bdevs_operational": 2, 00:15:15.501 "base_bdevs_list": [ 00:15:15.501 { 00:15:15.501 "name": "spare", 00:15:15.501 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:15.501 "is_configured": true, 00:15:15.501 "data_offset": 2048, 00:15:15.501 "data_size": 63488 00:15:15.501 }, 00:15:15.501 { 00:15:15.501 "name": "BaseBdev2", 00:15:15.501 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:15.501 "is_configured": true, 00:15:15.501 "data_offset": 2048, 00:15:15.501 "data_size": 63488 00:15:15.501 } 00:15:15.501 ] 00:15:15.501 }' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.501 
13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.501 "name": "raid_bdev1", 00:15:15.501 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:15.501 "strip_size_kb": 0, 00:15:15.501 "state": "online", 00:15:15.501 "raid_level": "raid1", 00:15:15.501 "superblock": true, 00:15:15.501 "num_base_bdevs": 2, 00:15:15.501 "num_base_bdevs_discovered": 2, 00:15:15.501 "num_base_bdevs_operational": 2, 00:15:15.501 "base_bdevs_list": [ 00:15:15.501 { 00:15:15.501 "name": "spare", 00:15:15.501 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:15.501 "is_configured": true, 00:15:15.501 "data_offset": 2048, 00:15:15.501 "data_size": 63488 00:15:15.501 }, 00:15:15.501 { 00:15:15.501 "name": "BaseBdev2", 00:15:15.501 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:15.501 "is_configured": true, 00:15:15.501 "data_offset": 2048, 00:15:15.501 "data_size": 63488 00:15:15.501 } 00:15:15.501 ] 00:15:15.501 }' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.501 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.761 13:49:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.761 "name": "raid_bdev1", 00:15:15.761 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:15.761 "strip_size_kb": 0, 00:15:15.761 "state": "online", 00:15:15.761 "raid_level": "raid1", 00:15:15.761 "superblock": true, 00:15:15.761 "num_base_bdevs": 2, 00:15:15.761 "num_base_bdevs_discovered": 2, 00:15:15.761 "num_base_bdevs_operational": 2, 00:15:15.761 "base_bdevs_list": [ 00:15:15.761 { 
00:15:15.761 "name": "spare", 00:15:15.761 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:15.761 "is_configured": true, 00:15:15.761 "data_offset": 2048, 00:15:15.761 "data_size": 63488 00:15:15.761 }, 00:15:15.761 { 00:15:15.761 "name": "BaseBdev2", 00:15:15.761 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:15.761 "is_configured": true, 00:15:15.761 "data_offset": 2048, 00:15:15.761 "data_size": 63488 00:15:15.761 } 00:15:15.761 ] 00:15:15.761 }' 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.761 13:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.020 [2024-10-01 13:49:26.170281] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.020 [2024-10-01 13:49:26.170325] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.020 [2024-10-01 13:49:26.170446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.020 [2024-10-01 13:49:26.170519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.020 [2024-10-01 13:49:26.170531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.020 13:49:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.020 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.278 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:16.278 /dev/nbd0 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.537 
13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.537 1+0 records in 00:15:16.537 1+0 records out 00:15:16.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718107 s, 5.7 MB/s 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:16.537 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:16.538 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.538 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.538 13:49:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:16.538 /dev/nbd1 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.796 1+0 records in 00:15:16.796 1+0 records out 00:15:16.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505064 s, 8.1 MB/s 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.796 13:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.056 
13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.056 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.315 [2024-10-01 13:49:27.449379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.315 [2024-10-01 13:49:27.449461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.315 [2024-10-01 13:49:27.449507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:17.315 [2024-10-01 13:49:27.449519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.315 [2024-10-01 13:49:27.452192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.315 [2024-10-01 13:49:27.452242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.315 [2024-10-01 13:49:27.452356] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.315 [2024-10-01 13:49:27.452437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.315 [2024-10-01 13:49:27.452612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.315 spare 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.315 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.574 [2024-10-01 13:49:27.552573] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:17.574 [2024-10-01 13:49:27.552646] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.574 [2024-10-01 13:49:27.553036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:17.574 [2024-10-01 
13:49:27.553249] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:17.574 [2024-10-01 13:49:27.553261] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:17.574 [2024-10-01 13:49:27.553515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.574 "name": "raid_bdev1", 00:15:17.574 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:17.574 "strip_size_kb": 0, 00:15:17.574 "state": "online", 00:15:17.574 "raid_level": "raid1", 00:15:17.574 "superblock": true, 00:15:17.574 "num_base_bdevs": 2, 00:15:17.574 "num_base_bdevs_discovered": 2, 00:15:17.574 "num_base_bdevs_operational": 2, 00:15:17.574 "base_bdevs_list": [ 00:15:17.574 { 00:15:17.574 "name": "spare", 00:15:17.574 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:17.574 "is_configured": true, 00:15:17.574 "data_offset": 2048, 00:15:17.574 "data_size": 63488 00:15:17.574 }, 00:15:17.574 { 00:15:17.574 "name": "BaseBdev2", 00:15:17.574 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:17.574 "is_configured": true, 00:15:17.574 "data_offset": 2048, 00:15:17.574 "data_size": 63488 00:15:17.574 } 00:15:17.574 ] 00:15:17.574 }' 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.574 13:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.833 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.093 "name": "raid_bdev1", 00:15:18.093 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:18.093 "strip_size_kb": 0, 00:15:18.093 "state": "online", 00:15:18.093 "raid_level": "raid1", 00:15:18.093 "superblock": true, 00:15:18.093 "num_base_bdevs": 2, 00:15:18.093 "num_base_bdevs_discovered": 2, 00:15:18.093 "num_base_bdevs_operational": 2, 00:15:18.093 "base_bdevs_list": [ 00:15:18.093 { 00:15:18.093 "name": "spare", 00:15:18.093 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:18.093 "is_configured": true, 00:15:18.093 "data_offset": 2048, 00:15:18.093 "data_size": 63488 00:15:18.093 }, 00:15:18.093 { 00:15:18.093 "name": "BaseBdev2", 00:15:18.093 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:18.093 "is_configured": true, 00:15:18.093 "data_offset": 2048, 00:15:18.093 "data_size": 63488 00:15:18.093 } 00:15:18.093 ] 00:15:18.093 }' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.093 13:49:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.093 [2024-10-01 13:49:28.208612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.093 "name": "raid_bdev1", 00:15:18.093 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:18.093 "strip_size_kb": 0, 00:15:18.093 "state": "online", 00:15:18.093 "raid_level": "raid1", 00:15:18.093 "superblock": true, 00:15:18.093 "num_base_bdevs": 2, 00:15:18.093 "num_base_bdevs_discovered": 1, 00:15:18.093 "num_base_bdevs_operational": 1, 00:15:18.093 "base_bdevs_list": [ 00:15:18.093 { 00:15:18.093 "name": null, 00:15:18.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.093 "is_configured": false, 00:15:18.093 "data_offset": 0, 00:15:18.093 "data_size": 63488 00:15:18.093 }, 00:15:18.093 { 00:15:18.093 "name": "BaseBdev2", 00:15:18.093 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:18.093 "is_configured": true, 00:15:18.093 "data_offset": 2048, 00:15:18.093 "data_size": 63488 00:15:18.093 } 00:15:18.093 ] 00:15:18.093 }' 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.093 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.662 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:15:18.662 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.662 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.662 [2024-10-01 13:49:28.659977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.662 [2024-10-01 13:49:28.660190] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.662 [2024-10-01 13:49:28.660210] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:18.662 [2024-10-01 13:49:28.660257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.662 [2024-10-01 13:49:28.676865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:18.662 13:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.662 13:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:18.662 [2024-10-01 13:49:28.679437] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.599 "name": "raid_bdev1", 00:15:19.599 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:19.599 "strip_size_kb": 0, 00:15:19.599 "state": "online", 00:15:19.599 "raid_level": "raid1", 00:15:19.599 "superblock": true, 00:15:19.599 "num_base_bdevs": 2, 00:15:19.599 "num_base_bdevs_discovered": 2, 00:15:19.599 "num_base_bdevs_operational": 2, 00:15:19.599 "process": { 00:15:19.599 "type": "rebuild", 00:15:19.599 "target": "spare", 00:15:19.599 "progress": { 00:15:19.599 "blocks": 20480, 00:15:19.599 "percent": 32 00:15:19.599 } 00:15:19.599 }, 00:15:19.599 "base_bdevs_list": [ 00:15:19.599 { 00:15:19.599 "name": "spare", 00:15:19.599 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:19.599 "is_configured": true, 00:15:19.599 "data_offset": 2048, 00:15:19.599 "data_size": 63488 00:15:19.599 }, 00:15:19.599 { 00:15:19.599 "name": "BaseBdev2", 00:15:19.599 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:19.599 "is_configured": true, 00:15:19.599 "data_offset": 2048, 00:15:19.599 "data_size": 63488 00:15:19.599 } 00:15:19.599 ] 00:15:19.599 }' 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.599 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.917 13:49:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 [2024-10-01 13:49:29.835693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.917 [2024-10-01 13:49:29.885706] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.917 [2024-10-01 13:49:29.886041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.917 [2024-10-01 13:49:29.886064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.917 [2024-10-01 13:49:29.886078] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.917 "name": "raid_bdev1", 00:15:19.917 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:19.917 "strip_size_kb": 0, 00:15:19.917 "state": "online", 00:15:19.917 "raid_level": "raid1", 00:15:19.917 "superblock": true, 00:15:19.917 "num_base_bdevs": 2, 00:15:19.917 "num_base_bdevs_discovered": 1, 00:15:19.917 "num_base_bdevs_operational": 1, 00:15:19.917 "base_bdevs_list": [ 00:15:19.917 { 00:15:19.917 "name": null, 00:15:19.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.917 "is_configured": false, 00:15:19.917 "data_offset": 0, 00:15:19.917 "data_size": 63488 00:15:19.917 }, 00:15:19.917 { 00:15:19.917 "name": "BaseBdev2", 00:15:19.917 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:19.917 "is_configured": true, 00:15:19.917 "data_offset": 2048, 00:15:19.917 "data_size": 63488 00:15:19.917 } 00:15:19.917 ] 00:15:19.917 }' 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.917 13:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.176 13:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:15:20.176 13:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.176 13:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.176 [2024-10-01 13:49:30.347864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.176 [2024-10-01 13:49:30.347947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.176 [2024-10-01 13:49:30.347973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:20.176 [2024-10-01 13:49:30.347989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.176 [2024-10-01 13:49:30.348564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.176 [2024-10-01 13:49:30.348593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.176 [2024-10-01 13:49:30.348698] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.176 [2024-10-01 13:49:30.348716] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.176 [2024-10-01 13:49:30.348733] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:20.176 [2024-10-01 13:49:30.348760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.176 [2024-10-01 13:49:30.365256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:20.435 spare 00:15:20.435 13:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.435 13:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:20.435 [2024-10-01 13:49:30.367677] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.371 "name": "raid_bdev1", 00:15:21.371 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:21.371 "strip_size_kb": 0, 00:15:21.371 "state": "online", 00:15:21.371 
"raid_level": "raid1", 00:15:21.371 "superblock": true, 00:15:21.371 "num_base_bdevs": 2, 00:15:21.371 "num_base_bdevs_discovered": 2, 00:15:21.371 "num_base_bdevs_operational": 2, 00:15:21.371 "process": { 00:15:21.371 "type": "rebuild", 00:15:21.371 "target": "spare", 00:15:21.371 "progress": { 00:15:21.371 "blocks": 20480, 00:15:21.371 "percent": 32 00:15:21.371 } 00:15:21.371 }, 00:15:21.371 "base_bdevs_list": [ 00:15:21.371 { 00:15:21.371 "name": "spare", 00:15:21.371 "uuid": "6f90b7a6-5561-5ace-82f8-0b37bf203130", 00:15:21.371 "is_configured": true, 00:15:21.371 "data_offset": 2048, 00:15:21.371 "data_size": 63488 00:15:21.371 }, 00:15:21.371 { 00:15:21.371 "name": "BaseBdev2", 00:15:21.371 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:21.371 "is_configured": true, 00:15:21.371 "data_offset": 2048, 00:15:21.371 "data_size": 63488 00:15:21.371 } 00:15:21.371 ] 00:15:21.371 }' 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.371 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.371 [2024-10-01 13:49:31.519725] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.631 [2024-10-01 13:49:31.573827] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.631 [2024-10-01 13:49:31.573921] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.631 [2024-10-01 13:49:31.573945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.631 [2024-10-01 13:49:31.573955] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.631 13:49:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.631 "name": "raid_bdev1", 00:15:21.631 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:21.631 "strip_size_kb": 0, 00:15:21.631 "state": "online", 00:15:21.631 "raid_level": "raid1", 00:15:21.631 "superblock": true, 00:15:21.631 "num_base_bdevs": 2, 00:15:21.631 "num_base_bdevs_discovered": 1, 00:15:21.631 "num_base_bdevs_operational": 1, 00:15:21.631 "base_bdevs_list": [ 00:15:21.631 { 00:15:21.631 "name": null, 00:15:21.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.631 "is_configured": false, 00:15:21.631 "data_offset": 0, 00:15:21.631 "data_size": 63488 00:15:21.631 }, 00:15:21.631 { 00:15:21.631 "name": "BaseBdev2", 00:15:21.631 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:21.631 "is_configured": true, 00:15:21.631 "data_offset": 2048, 00:15:21.631 "data_size": 63488 00:15:21.631 } 00:15:21.631 ] 00:15:21.631 }' 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.631 13:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.890 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.149 "name": "raid_bdev1", 00:15:22.149 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:22.149 "strip_size_kb": 0, 00:15:22.149 "state": "online", 00:15:22.149 "raid_level": "raid1", 00:15:22.149 "superblock": true, 00:15:22.149 "num_base_bdevs": 2, 00:15:22.149 "num_base_bdevs_discovered": 1, 00:15:22.149 "num_base_bdevs_operational": 1, 00:15:22.149 "base_bdevs_list": [ 00:15:22.149 { 00:15:22.149 "name": null, 00:15:22.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.149 "is_configured": false, 00:15:22.149 "data_offset": 0, 00:15:22.149 "data_size": 63488 00:15:22.149 }, 00:15:22.149 { 00:15:22.149 "name": "BaseBdev2", 00:15:22.149 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:22.149 "is_configured": true, 00:15:22.149 "data_offset": 2048, 00:15:22.149 "data_size": 63488 00:15:22.149 } 00:15:22.149 ] 00:15:22.149 }' 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:22.149 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.150 [2024-10-01 13:49:32.187604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:22.150 [2024-10-01 13:49:32.187691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.150 [2024-10-01 13:49:32.187719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:22.150 [2024-10-01 13:49:32.187732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.150 [2024-10-01 13:49:32.188240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.150 [2024-10-01 13:49:32.188260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.150 [2024-10-01 13:49:32.188357] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:22.150 [2024-10-01 13:49:32.188373] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.150 [2024-10-01 13:49:32.188390] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:22.150 [2024-10-01 13:49:32.188429] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:22.150 BaseBdev1 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:22.150 13:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:23.086 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.087 "name": "raid_bdev1", 00:15:23.087 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:23.087 "strip_size_kb": 0, 
00:15:23.087 "state": "online", 00:15:23.087 "raid_level": "raid1", 00:15:23.087 "superblock": true, 00:15:23.087 "num_base_bdevs": 2, 00:15:23.087 "num_base_bdevs_discovered": 1, 00:15:23.087 "num_base_bdevs_operational": 1, 00:15:23.087 "base_bdevs_list": [ 00:15:23.087 { 00:15:23.087 "name": null, 00:15:23.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.087 "is_configured": false, 00:15:23.087 "data_offset": 0, 00:15:23.087 "data_size": 63488 00:15:23.087 }, 00:15:23.087 { 00:15:23.087 "name": "BaseBdev2", 00:15:23.087 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:23.087 "is_configured": true, 00:15:23.087 "data_offset": 2048, 00:15:23.087 "data_size": 63488 00:15:23.087 } 00:15:23.087 ] 00:15:23.087 }' 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.087 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.673 "name": "raid_bdev1", 00:15:23.673 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:23.673 "strip_size_kb": 0, 00:15:23.673 "state": "online", 00:15:23.673 "raid_level": "raid1", 00:15:23.673 "superblock": true, 00:15:23.673 "num_base_bdevs": 2, 00:15:23.673 "num_base_bdevs_discovered": 1, 00:15:23.673 "num_base_bdevs_operational": 1, 00:15:23.673 "base_bdevs_list": [ 00:15:23.673 { 00:15:23.673 "name": null, 00:15:23.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.673 "is_configured": false, 00:15:23.673 "data_offset": 0, 00:15:23.673 "data_size": 63488 00:15:23.673 }, 00:15:23.673 { 00:15:23.673 "name": "BaseBdev2", 00:15:23.673 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:23.673 "is_configured": true, 00:15:23.673 "data_offset": 2048, 00:15:23.673 "data_size": 63488 00:15:23.673 } 00:15:23.673 ] 00:15:23.673 }' 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:23.673 13:49:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.673 [2024-10-01 13:49:33.791701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.673 [2024-10-01 13:49:33.791884] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:23.673 [2024-10-01 13:49:33.791903] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:23.673 request: 00:15:23.673 { 00:15:23.673 "base_bdev": "BaseBdev1", 00:15:23.673 "raid_bdev": "raid_bdev1", 00:15:23.673 "method": "bdev_raid_add_base_bdev", 00:15:23.673 "req_id": 1 00:15:23.673 } 00:15:23.673 Got JSON-RPC error response 00:15:23.673 response: 00:15:23.673 { 00:15:23.673 "code": -22, 00:15:23.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:23.673 } 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:23.673 13:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.051 "name": "raid_bdev1", 00:15:25.051 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 
00:15:25.051 "strip_size_kb": 0, 00:15:25.051 "state": "online", 00:15:25.051 "raid_level": "raid1", 00:15:25.051 "superblock": true, 00:15:25.051 "num_base_bdevs": 2, 00:15:25.051 "num_base_bdevs_discovered": 1, 00:15:25.051 "num_base_bdevs_operational": 1, 00:15:25.051 "base_bdevs_list": [ 00:15:25.051 { 00:15:25.051 "name": null, 00:15:25.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.051 "is_configured": false, 00:15:25.051 "data_offset": 0, 00:15:25.051 "data_size": 63488 00:15:25.051 }, 00:15:25.051 { 00:15:25.051 "name": "BaseBdev2", 00:15:25.051 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:25.051 "is_configured": true, 00:15:25.051 "data_offset": 2048, 00:15:25.051 "data_size": 63488 00:15:25.051 } 00:15:25.051 ] 00:15:25.051 }' 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.051 13:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.310 13:49:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.310 "name": "raid_bdev1", 00:15:25.310 "uuid": "9f40d3a4-a5df-4a79-8a59-c2050121b3bc", 00:15:25.310 "strip_size_kb": 0, 00:15:25.310 "state": "online", 00:15:25.310 "raid_level": "raid1", 00:15:25.310 "superblock": true, 00:15:25.310 "num_base_bdevs": 2, 00:15:25.310 "num_base_bdevs_discovered": 1, 00:15:25.310 "num_base_bdevs_operational": 1, 00:15:25.310 "base_bdevs_list": [ 00:15:25.310 { 00:15:25.310 "name": null, 00:15:25.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.310 "is_configured": false, 00:15:25.310 "data_offset": 0, 00:15:25.310 "data_size": 63488 00:15:25.310 }, 00:15:25.310 { 00:15:25.310 "name": "BaseBdev2", 00:15:25.310 "uuid": "f2357de1-0488-56b7-8a4b-9ea6868a6341", 00:15:25.310 "is_configured": true, 00:15:25.310 "data_offset": 2048, 00:15:25.310 "data_size": 63488 00:15:25.310 } 00:15:25.310 ] 00:15:25.310 }' 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.310 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75703 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75703 ']' 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75703 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75703 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.311 killing process with pid 75703 00:15:25.311 Received shutdown signal, test time was about 60.000000 seconds 00:15:25.311 00:15:25.311 Latency(us) 00:15:25.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.311 =================================================================================================================== 00:15:25.311 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75703' 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75703 00:15:25.311 13:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75703 00:15:25.311 [2024-10-01 13:49:35.465819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.311 [2024-10-01 13:49:35.466040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.311 [2024-10-01 13:49:35.466115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.311 [2024-10-01 13:49:35.466133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:25.880 [2024-10-01 13:49:35.807850] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:27.262 00:15:27.262 real 0m25.214s 00:15:27.262 user 0m29.751s 00:15:27.262 sys 0m4.915s 00:15:27.262 13:49:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.262 ************************************ 00:15:27.262 END TEST raid_rebuild_test_sb 00:15:27.262 ************************************ 00:15:27.262 13:49:37 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:27.262 13:49:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:27.262 13:49:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.262 13:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.262 ************************************ 00:15:27.262 START TEST raid_rebuild_test_io 00:15:27.262 ************************************ 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76452 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76452 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76452 ']' 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:27.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.262 13:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:27.262 Zero copy mechanism will not be used. 00:15:27.262 [2024-10-01 13:49:37.445859] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:27.262 [2024-10-01 13:49:37.446034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76452 ] 00:15:27.520 [2024-10-01 13:49:37.626721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.779 [2024-10-01 13:49:37.902931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.039 [2024-10-01 13:49:38.157743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.039 [2024-10-01 13:49:38.157815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.297 BaseBdev1_malloc 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.297 [2024-10-01 13:49:38.386888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.297 [2024-10-01 13:49:38.386994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.297 [2024-10-01 13:49:38.387029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:28.297 [2024-10-01 13:49:38.387054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.297 [2024-10-01 13:49:38.389940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.297 [2024-10-01 13:49:38.389994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.297 BaseBdev1 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.297 BaseBdev2_malloc 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.297 [2024-10-01 13:49:38.473765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:28.297 [2024-10-01 13:49:38.474053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.297 [2024-10-01 13:49:38.474123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:28.297 [2024-10-01 13:49:38.474252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.297 [2024-10-01 13:49:38.477111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.297 [2024-10-01 13:49:38.477283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:28.297 BaseBdev2 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.297 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:28.298 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.298 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 spare_malloc 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 spare_delay 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 [2024-10-01 13:49:38.550609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.557 [2024-10-01 13:49:38.550925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.557 [2024-10-01 13:49:38.550993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:28.557 [2024-10-01 13:49:38.551086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.557 [2024-10-01 13:49:38.554015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.557 [2024-10-01 13:49:38.554190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.557 spare 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 
[2024-10-01 13:49:38.566653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.557 [2024-10-01 13:49:38.569194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.557 [2024-10-01 13:49:38.569446] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:28.557 [2024-10-01 13:49:38.569512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:28.557 [2024-10-01 13:49:38.569972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:28.557 [2024-10-01 13:49:38.570266] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:28.557 [2024-10-01 13:49:38.570366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:28.557 [2024-10-01 13:49:38.570608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.557 "name": "raid_bdev1", 00:15:28.557 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:28.557 "strip_size_kb": 0, 00:15:28.557 "state": "online", 00:15:28.557 "raid_level": "raid1", 00:15:28.557 "superblock": false, 00:15:28.557 "num_base_bdevs": 2, 00:15:28.557 "num_base_bdevs_discovered": 2, 00:15:28.557 "num_base_bdevs_operational": 2, 00:15:28.558 "base_bdevs_list": [ 00:15:28.558 { 00:15:28.558 "name": "BaseBdev1", 00:15:28.558 "uuid": "b114e14e-1a2f-5adf-93fe-43a0d7a8f212", 00:15:28.558 "is_configured": true, 00:15:28.558 "data_offset": 0, 00:15:28.558 "data_size": 65536 00:15:28.558 }, 00:15:28.558 { 00:15:28.558 "name": "BaseBdev2", 00:15:28.558 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:28.558 "is_configured": true, 00:15:28.558 "data_offset": 0, 00:15:28.558 "data_size": 65536 00:15:28.558 } 00:15:28.558 ] 00:15:28.558 }' 00:15:28.558 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.558 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.817 13:49:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.817 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.817 13:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:28.817 13:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.817 [2024-10-01 13:49:38.982599] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:29.077 [2024-10-01 13:49:39.078134] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.077 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.077 "name": "raid_bdev1", 00:15:29.077 "uuid": 
"2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:29.077 "strip_size_kb": 0, 00:15:29.077 "state": "online", 00:15:29.077 "raid_level": "raid1", 00:15:29.077 "superblock": false, 00:15:29.077 "num_base_bdevs": 2, 00:15:29.077 "num_base_bdevs_discovered": 1, 00:15:29.078 "num_base_bdevs_operational": 1, 00:15:29.078 "base_bdevs_list": [ 00:15:29.078 { 00:15:29.078 "name": null, 00:15:29.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.078 "is_configured": false, 00:15:29.078 "data_offset": 0, 00:15:29.078 "data_size": 65536 00:15:29.078 }, 00:15:29.078 { 00:15:29.078 "name": "BaseBdev2", 00:15:29.078 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:29.078 "is_configured": true, 00:15:29.078 "data_offset": 0, 00:15:29.078 "data_size": 65536 00:15:29.078 } 00:15:29.078 ] 00:15:29.078 }' 00:15:29.078 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.078 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.078 [2024-10-01 13:49:39.198879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:29.078 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:29.078 Zero copy mechanism will not be used. 00:15:29.078 Running I/O for 60 seconds... 
00:15:29.645 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.645 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.645 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.645 [2024-10-01 13:49:39.550093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.645 13:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.645 13:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:29.645 [2024-10-01 13:49:39.613125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:29.645 [2024-10-01 13:49:39.615526] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.645 [2024-10-01 13:49:39.724330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:29.645 [2024-10-01 13:49:39.724934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:29.904 [2024-10-01 13:49:39.841152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:29.904 [2024-10-01 13:49:39.841518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:30.163 [2024-10-01 13:49:40.194696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:30.163 [2024-10-01 13:49:40.195244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:30.423 163.00 IOPS, 489.00 MiB/s [2024-10-01 13:49:40.403983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:30.423 [2024-10-01 13:49:40.404319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.423 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.682 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.682 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.682 "name": "raid_bdev1", 00:15:30.682 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:30.682 "strip_size_kb": 0, 00:15:30.682 "state": "online", 00:15:30.682 "raid_level": "raid1", 00:15:30.682 "superblock": false, 00:15:30.682 "num_base_bdevs": 2, 00:15:30.682 "num_base_bdevs_discovered": 2, 00:15:30.682 "num_base_bdevs_operational": 2, 00:15:30.682 "process": { 00:15:30.682 "type": "rebuild", 00:15:30.683 "target": "spare", 00:15:30.683 "progress": { 00:15:30.683 "blocks": 10240, 00:15:30.683 "percent": 15 00:15:30.683 } 
00:15:30.683 }, 00:15:30.683 "base_bdevs_list": [ 00:15:30.683 { 00:15:30.683 "name": "spare", 00:15:30.683 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:30.683 "is_configured": true, 00:15:30.683 "data_offset": 0, 00:15:30.683 "data_size": 65536 00:15:30.683 }, 00:15:30.683 { 00:15:30.683 "name": "BaseBdev2", 00:15:30.683 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:30.683 "is_configured": true, 00:15:30.683 "data_offset": 0, 00:15:30.683 "data_size": 65536 00:15:30.683 } 00:15:30.683 ] 00:15:30.683 }' 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.683 [2024-10-01 13:49:40.741057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.683 [2024-10-01 13:49:40.751074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:30.683 [2024-10-01 13:49:40.773745] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.683 [2024-10-01 13:49:40.776728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.683 [2024-10-01 13:49:40.776777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.683 [2024-10-01 13:49:40.776793] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.683 [2024-10-01 13:49:40.806622] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.683 "name": "raid_bdev1", 00:15:30.683 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:30.683 "strip_size_kb": 0, 00:15:30.683 "state": "online", 00:15:30.683 "raid_level": "raid1", 00:15:30.683 "superblock": false, 00:15:30.683 "num_base_bdevs": 2, 00:15:30.683 "num_base_bdevs_discovered": 1, 00:15:30.683 "num_base_bdevs_operational": 1, 00:15:30.683 "base_bdevs_list": [ 00:15:30.683 { 00:15:30.683 "name": null, 00:15:30.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.683 "is_configured": false, 00:15:30.683 "data_offset": 0, 00:15:30.683 "data_size": 65536 00:15:30.683 }, 00:15:30.683 { 00:15:30.683 "name": "BaseBdev2", 00:15:30.683 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:30.683 "is_configured": true, 00:15:30.683 "data_offset": 0, 00:15:30.683 "data_size": 65536 00:15:30.683 } 00:15:30.683 ] 00:15:30.683 }' 00:15:30.683 13:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.943 13:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.202 165.50 IOPS, 496.50 MiB/s 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.202 13:49:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.202 "name": "raid_bdev1", 00:15:31.202 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:31.202 "strip_size_kb": 0, 00:15:31.202 "state": "online", 00:15:31.202 "raid_level": "raid1", 00:15:31.202 "superblock": false, 00:15:31.202 "num_base_bdevs": 2, 00:15:31.202 "num_base_bdevs_discovered": 1, 00:15:31.202 "num_base_bdevs_operational": 1, 00:15:31.202 "base_bdevs_list": [ 00:15:31.202 { 00:15:31.202 "name": null, 00:15:31.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.202 "is_configured": false, 00:15:31.202 "data_offset": 0, 00:15:31.202 "data_size": 65536 00:15:31.202 }, 00:15:31.202 { 00:15:31.202 "name": "BaseBdev2", 00:15:31.202 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:31.202 "is_configured": true, 00:15:31.202 "data_offset": 0, 00:15:31.202 "data_size": 65536 00:15:31.202 } 00:15:31.202 ] 00:15:31.202 }' 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.202 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.461 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.461 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.461 13:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.461 13:49:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.461 [2024-10-01 13:49:41.441351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.461 13:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.461 13:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.461 [2024-10-01 13:49:41.512643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:31.461 [2024-10-01 13:49:41.514971] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.461 [2024-10-01 13:49:41.625104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:31.461 [2024-10-01 13:49:41.625947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:31.720 [2024-10-01 13:49:41.848515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:31.720 [2024-10-01 13:49:41.849040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:32.288 [2024-10-01 13:49:42.193243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:32.288 [2024-10-01 13:49:42.194099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:32.288 151.00 IOPS, 453.00 MiB/s [2024-10-01 13:49:42.410215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.548 "name": "raid_bdev1", 00:15:32.548 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:32.548 "strip_size_kb": 0, 00:15:32.548 "state": "online", 00:15:32.548 "raid_level": "raid1", 00:15:32.548 "superblock": false, 00:15:32.548 "num_base_bdevs": 2, 00:15:32.548 "num_base_bdevs_discovered": 2, 00:15:32.548 "num_base_bdevs_operational": 2, 00:15:32.548 "process": { 00:15:32.548 "type": "rebuild", 00:15:32.548 "target": "spare", 00:15:32.548 "progress": { 00:15:32.548 "blocks": 10240, 00:15:32.548 "percent": 15 00:15:32.548 } 00:15:32.548 }, 00:15:32.548 "base_bdevs_list": [ 00:15:32.548 { 00:15:32.548 "name": "spare", 00:15:32.548 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:32.548 "is_configured": true, 00:15:32.548 "data_offset": 0, 00:15:32.548 "data_size": 65536 00:15:32.548 }, 00:15:32.548 { 00:15:32.548 "name": "BaseBdev2", 00:15:32.548 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:32.548 "is_configured": true, 00:15:32.548 
"data_offset": 0, 00:15:32.548 "data_size": 65536 00:15:32.548 } 00:15:32.548 ] 00:15:32.548 }' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=417 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.548 13:49:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.548 "name": "raid_bdev1", 00:15:32.548 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:32.548 "strip_size_kb": 0, 00:15:32.548 "state": "online", 00:15:32.548 "raid_level": "raid1", 00:15:32.548 "superblock": false, 00:15:32.548 "num_base_bdevs": 2, 00:15:32.548 "num_base_bdevs_discovered": 2, 00:15:32.548 "num_base_bdevs_operational": 2, 00:15:32.548 "process": { 00:15:32.548 "type": "rebuild", 00:15:32.548 "target": "spare", 00:15:32.548 "progress": { 00:15:32.548 "blocks": 12288, 00:15:32.548 "percent": 18 00:15:32.548 } 00:15:32.548 }, 00:15:32.548 "base_bdevs_list": [ 00:15:32.548 { 00:15:32.548 "name": "spare", 00:15:32.548 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:32.548 "is_configured": true, 00:15:32.548 "data_offset": 0, 00:15:32.548 "data_size": 65536 00:15:32.548 }, 00:15:32.548 { 00:15:32.548 "name": "BaseBdev2", 00:15:32.548 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:32.548 "is_configured": true, 00:15:32.548 "data_offset": 0, 00:15:32.548 "data_size": 65536 00:15:32.548 } 00:15:32.548 ] 00:15:32.548 }' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.548 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.807 [2024-10-01 13:49:42.754084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:15:32.807 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.807 13:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.807 [2024-10-01 13:49:42.975278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:33.327 134.75 IOPS, 404.25 MiB/s [2024-10-01 13:49:43.307753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:33.327 [2024-10-01 13:49:43.308591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:33.586 [2024-10-01 13:49:43.530826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.586 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.862 "name": "raid_bdev1", 00:15:33.862 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:33.862 "strip_size_kb": 0, 00:15:33.862 "state": "online", 00:15:33.862 "raid_level": "raid1", 00:15:33.862 "superblock": false, 00:15:33.862 "num_base_bdevs": 2, 00:15:33.862 "num_base_bdevs_discovered": 2, 00:15:33.862 "num_base_bdevs_operational": 2, 00:15:33.862 "process": { 00:15:33.862 "type": "rebuild", 00:15:33.862 "target": "spare", 00:15:33.862 "progress": { 00:15:33.862 "blocks": 24576, 00:15:33.862 "percent": 37 00:15:33.862 } 00:15:33.862 }, 00:15:33.862 "base_bdevs_list": [ 00:15:33.862 { 00:15:33.862 "name": "spare", 00:15:33.862 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:33.862 "is_configured": true, 00:15:33.862 "data_offset": 0, 00:15:33.862 "data_size": 65536 00:15:33.862 }, 00:15:33.862 { 00:15:33.862 "name": "BaseBdev2", 00:15:33.862 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:33.862 "is_configured": true, 00:15:33.862 "data_offset": 0, 00:15:33.862 "data_size": 65536 00:15:33.862 } 00:15:33.862 ] 00:15:33.862 }' 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.862 [2024-10-01 13:49:43.855531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.862 13:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.142 
[2024-10-01 13:49:44.192714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:34.401 121.80 IOPS, 365.40 MiB/s [2024-10-01 13:49:44.496154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:34.659 [2024-10-01 13:49:44.604648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.916 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.916 "name": "raid_bdev1", 00:15:34.916 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:34.916 "strip_size_kb": 0, 00:15:34.916 "state": "online", 00:15:34.916 
"raid_level": "raid1", 00:15:34.916 "superblock": false, 00:15:34.916 "num_base_bdevs": 2, 00:15:34.916 "num_base_bdevs_discovered": 2, 00:15:34.916 "num_base_bdevs_operational": 2, 00:15:34.916 "process": { 00:15:34.916 "type": "rebuild", 00:15:34.916 "target": "spare", 00:15:34.916 "progress": { 00:15:34.916 "blocks": 45056, 00:15:34.916 "percent": 68 00:15:34.916 } 00:15:34.916 }, 00:15:34.916 "base_bdevs_list": [ 00:15:34.916 { 00:15:34.916 "name": "spare", 00:15:34.916 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:34.916 "is_configured": true, 00:15:34.916 "data_offset": 0, 00:15:34.916 "data_size": 65536 00:15:34.916 }, 00:15:34.916 { 00:15:34.916 "name": "BaseBdev2", 00:15:34.916 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:34.916 "is_configured": true, 00:15:34.916 "data_offset": 0, 00:15:34.916 "data_size": 65536 00:15:34.916 } 00:15:34.916 ] 00:15:34.917 }' 00:15:34.917 13:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.917 13:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.917 13:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.917 [2024-10-01 13:49:45.034288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:34.917 13:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.917 13:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.174 106.67 IOPS, 320.00 MiB/s [2024-10-01 13:49:45.249221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:35.432 [2024-10-01 13:49:45.370228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:35.998 13:49:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.998 "name": "raid_bdev1", 00:15:35.998 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:35.998 "strip_size_kb": 0, 00:15:35.998 "state": "online", 00:15:35.998 "raid_level": "raid1", 00:15:35.998 "superblock": false, 00:15:35.998 "num_base_bdevs": 2, 00:15:35.998 "num_base_bdevs_discovered": 2, 00:15:35.998 "num_base_bdevs_operational": 2, 00:15:35.998 "process": { 00:15:35.998 "type": "rebuild", 00:15:35.998 "target": "spare", 00:15:35.998 "progress": { 00:15:35.998 "blocks": 63488, 00:15:35.998 "percent": 96 00:15:35.998 } 00:15:35.998 }, 00:15:35.998 "base_bdevs_list": [ 00:15:35.998 { 00:15:35.998 "name": "spare", 00:15:35.998 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 
00:15:35.998 "is_configured": true, 00:15:35.998 "data_offset": 0, 00:15:35.998 "data_size": 65536 00:15:35.998 }, 00:15:35.998 { 00:15:35.998 "name": "BaseBdev2", 00:15:35.998 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:35.998 "is_configured": true, 00:15:35.998 "data_offset": 0, 00:15:35.998 "data_size": 65536 00:15:35.998 } 00:15:35.998 ] 00:15:35.998 }' 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.998 [2024-10-01 13:49:46.148332] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.998 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.256 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.256 13:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.256 97.14 IOPS, 291.43 MiB/s [2024-10-01 13:49:46.248217] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:36.256 [2024-10-01 13:49:46.250381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.193 88.75 IOPS, 266.25 MiB/s 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.193 "name": "raid_bdev1", 00:15:37.193 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:37.193 "strip_size_kb": 0, 00:15:37.193 "state": "online", 00:15:37.193 "raid_level": "raid1", 00:15:37.193 "superblock": false, 00:15:37.193 "num_base_bdevs": 2, 00:15:37.193 "num_base_bdevs_discovered": 2, 00:15:37.193 "num_base_bdevs_operational": 2, 00:15:37.193 "base_bdevs_list": [ 00:15:37.193 { 00:15:37.193 "name": "spare", 00:15:37.193 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:37.193 "is_configured": true, 00:15:37.193 "data_offset": 0, 00:15:37.193 "data_size": 65536 00:15:37.193 }, 00:15:37.193 { 00:15:37.193 "name": "BaseBdev2", 00:15:37.193 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:37.193 "is_configured": true, 00:15:37.193 "data_offset": 0, 00:15:37.193 "data_size": 65536 00:15:37.193 } 00:15:37.193 ] 00:15:37.193 }' 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 
00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.193 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.453 "name": "raid_bdev1", 00:15:37.453 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:37.453 "strip_size_kb": 0, 00:15:37.453 "state": "online", 00:15:37.453 "raid_level": "raid1", 00:15:37.453 "superblock": false, 00:15:37.453 "num_base_bdevs": 2, 00:15:37.453 "num_base_bdevs_discovered": 2, 00:15:37.453 "num_base_bdevs_operational": 2, 00:15:37.453 "base_bdevs_list": [ 00:15:37.453 { 00:15:37.453 "name": "spare", 00:15:37.453 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 0, 00:15:37.453 "data_size": 65536 00:15:37.453 }, 00:15:37.453 { 00:15:37.453 "name": "BaseBdev2", 00:15:37.453 "uuid": 
"5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 0, 00:15:37.453 "data_size": 65536 00:15:37.453 } 00:15:37.453 ] 00:15:37.453 }' 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.453 13:49:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.453 "name": "raid_bdev1", 00:15:37.453 "uuid": "2f166803-abe1-4d99-9f3c-2a09eef5b588", 00:15:37.453 "strip_size_kb": 0, 00:15:37.453 "state": "online", 00:15:37.453 "raid_level": "raid1", 00:15:37.453 "superblock": false, 00:15:37.453 "num_base_bdevs": 2, 00:15:37.453 "num_base_bdevs_discovered": 2, 00:15:37.453 "num_base_bdevs_operational": 2, 00:15:37.453 "base_bdevs_list": [ 00:15:37.453 { 00:15:37.453 "name": "spare", 00:15:37.453 "uuid": "ecbbef0e-5b17-5178-ae3b-6934431b74c0", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 0, 00:15:37.453 "data_size": 65536 00:15:37.453 }, 00:15:37.453 { 00:15:37.453 "name": "BaseBdev2", 00:15:37.453 "uuid": "5ae7975d-d08f-5eb1-9a62-34f76e348ebe", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 0, 00:15:37.453 "data_size": 65536 00:15:37.453 } 00:15:37.453 ] 00:15:37.453 }' 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.453 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.712 13:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.712 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.712 13:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.712 [2024-10-01 13:49:47.889306] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.712 [2024-10-01 13:49:47.889340] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:15:37.972 00:15:37.972 Latency(us) 00:15:37.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.972 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:37.972 raid_bdev1 : 8.79 83.54 250.63 0.00 0.00 16755.81 302.68 115385.47 00:15:37.972 =================================================================================================================== 00:15:37.972 Total : 83.54 250.63 0.00 0.00 16755.81 302.68 115385.47 00:15:37.972 [2024-10-01 13:49:47.997140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.972 [2024-10-01 13:49:47.997202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.972 [2024-10-01 13:49:47.997295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.972 [2024-10-01 13:49:47.997308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.972 { 00:15:37.972 "results": [ 00:15:37.972 { 00:15:37.972 "job": "raid_bdev1", 00:15:37.972 "core_mask": "0x1", 00:15:37.972 "workload": "randrw", 00:15:37.972 "percentage": 50, 00:15:37.972 "status": "finished", 00:15:37.972 "queue_depth": 2, 00:15:37.972 "io_size": 3145728, 00:15:37.972 "runtime": 8.785747, 00:15:37.972 "iops": 83.5444043631122, 00:15:37.972 "mibps": 250.6332130893366, 00:15:37.972 "io_failed": 0, 00:15:37.972 "io_timeout": 0, 00:15:37.972 "avg_latency_us": 16755.80580195441, 00:15:37.972 "min_latency_us": 302.67630522088353, 00:15:37.972 "max_latency_us": 115385.47148594378 00:15:37.972 } 00:15:37.972 ], 00:15:37.972 "core_count": 1 00:15:37.972 } 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.972 13:49:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.972 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:38.231 /dev/nbd0 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.231 1+0 records in 00:15:38.231 1+0 records out 00:15:38.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340853 s, 12.0 MB/s 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.231 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:38.490 /dev/nbd1 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.490 1+0 records in 00:15:38.490 1+0 records out 00:15:38.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466065 s, 8.8 MB/s 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.490 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.749 13:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.008 13:49:49 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76452 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76452 ']' 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76452 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76452 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.267 killing process with pid 76452 00:15:39.267 Received shutdown signal, test time was about 10.138216 seconds 00:15:39.267 00:15:39.267 Latency(us) 00:15:39.267 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:39.267 =================================================================================================================== 00:15:39.267 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76452' 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76452 00:15:39.267 [2024-10-01 13:49:49.323392] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.267 13:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76452 00:15:39.526 [2024-10-01 13:49:49.559584] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.902 13:49:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:40.902 00:15:40.902 real 0m13.714s 00:15:40.902 user 0m16.849s 00:15:40.902 sys 0m1.888s 00:15:40.902 13:49:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.902 13:49:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.902 ************************************ 00:15:40.902 END TEST raid_rebuild_test_io 00:15:40.902 ************************************ 00:15:41.161 13:49:51 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:41.161 13:49:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:41.161 13:49:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.161 13:49:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 ************************************ 00:15:41.161 START TEST raid_rebuild_test_sb_io 00:15:41.161 ************************************ 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.161 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76854 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76854 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76854 ']' 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.162 13:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.162 [2024-10-01 13:49:51.222272] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:41.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:41.162 Zero copy mechanism will not be used. 00:15:41.162 [2024-10-01 13:49:51.222613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ] 00:15:41.422 [2024-10-01 13:49:51.395785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.422 [2024-10-01 13:49:51.612230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.681 [2024-10-01 13:49:51.822713] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.681 [2024-10-01 13:49:51.822763] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.941 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 BaseBdev1_malloc 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 [2024-10-01 13:49:52.161787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:42.201 [2024-10-01 13:49:52.161868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.201 [2024-10-01 13:49:52.161897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:42.201 [2024-10-01 13:49:52.161916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.201 [2024-10-01 13:49:52.164476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.201 [2024-10-01 13:49:52.164520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.201 BaseBdev1 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 BaseBdev2_malloc 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 [2024-10-01 13:49:52.229373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:42.201 [2024-10-01 13:49:52.229461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.201 [2024-10-01 13:49:52.229484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:42.201 [2024-10-01 13:49:52.229498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.201 [2024-10-01 13:49:52.231921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.201 [2024-10-01 13:49:52.231968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:42.201 BaseBdev2 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 spare_malloc 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 spare_delay 
00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 [2024-10-01 13:49:52.291948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:42.201 [2024-10-01 13:49:52.292020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.201 [2024-10-01 13:49:52.292046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:42.201 [2024-10-01 13:49:52.292062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.201 [2024-10-01 13:49:52.294606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.201 [2024-10-01 13:49:52.294787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:42.201 spare 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 [2024-10-01 13:49:52.299992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.201 [2024-10-01 13:49:52.302076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.201 [2024-10-01 13:49:52.302255] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:42.201 [2024-10-01 13:49:52.302272] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:42.201 [2024-10-01 13:49:52.302595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:42.201 [2024-10-01 13:49:52.302767] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:42.201 [2024-10-01 13:49:52.302777] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:42.201 [2024-10-01 13:49:52.302956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.201 13:49:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.201 "name": "raid_bdev1", 00:15:42.201 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:42.201 "strip_size_kb": 0, 00:15:42.201 "state": "online", 00:15:42.201 "raid_level": "raid1", 00:15:42.201 "superblock": true, 00:15:42.201 "num_base_bdevs": 2, 00:15:42.201 "num_base_bdevs_discovered": 2, 00:15:42.201 "num_base_bdevs_operational": 2, 00:15:42.201 "base_bdevs_list": [ 00:15:42.201 { 00:15:42.201 "name": "BaseBdev1", 00:15:42.201 "uuid": "152006fe-c096-5e03-b2d5-e4cf94aa4932", 00:15:42.201 "is_configured": true, 00:15:42.201 "data_offset": 2048, 00:15:42.201 "data_size": 63488 00:15:42.201 }, 00:15:42.201 { 00:15:42.201 "name": "BaseBdev2", 00:15:42.201 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:42.201 "is_configured": true, 00:15:42.201 "data_offset": 2048, 00:15:42.201 "data_size": 63488 00:15:42.201 } 00:15:42.201 ] 00:15:42.201 }' 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.201 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.774 13:49:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 [2024-10-01 13:49:52.767921] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 [2024-10-01 13:49:52.855617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.774 "name": "raid_bdev1", 00:15:42.774 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:42.774 "strip_size_kb": 0, 00:15:42.774 "state": "online", 00:15:42.774 
"raid_level": "raid1", 00:15:42.774 "superblock": true, 00:15:42.774 "num_base_bdevs": 2, 00:15:42.774 "num_base_bdevs_discovered": 1, 00:15:42.774 "num_base_bdevs_operational": 1, 00:15:42.774 "base_bdevs_list": [ 00:15:42.774 { 00:15:42.774 "name": null, 00:15:42.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.774 "is_configured": false, 00:15:42.774 "data_offset": 0, 00:15:42.774 "data_size": 63488 00:15:42.774 }, 00:15:42.774 { 00:15:42.774 "name": "BaseBdev2", 00:15:42.774 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:42.774 "is_configured": true, 00:15:42.774 "data_offset": 2048, 00:15:42.774 "data_size": 63488 00:15:42.774 } 00:15:42.774 ] 00:15:42.774 }' 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.774 13:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.035 [2024-10-01 13:49:52.992638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:43.035 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:43.035 Zero copy mechanism will not be used. 00:15:43.035 Running I/O for 60 seconds... 
00:15:43.295 13:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.296 13:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.296 13:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.296 [2024-10-01 13:49:53.292115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.296 13:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.296 13:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:43.296 [2024-10-01 13:49:53.351563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:43.296 [2024-10-01 13:49:53.353922] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.296 [2024-10-01 13:49:53.455755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.296 [2024-10-01 13:49:53.456622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.560 [2024-10-01 13:49:53.658653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.560 [2024-10-01 13:49:53.659610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.826 [2024-10-01 13:49:53.986260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:43.826 [2024-10-01 13:49:53.987558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:44.094 138.00 IOPS, 414.00 MiB/s [2024-10-01 13:49:54.198858] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:44.094 [2024-10-01 13:49:54.199717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.363 "name": "raid_bdev1", 00:15:44.363 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:44.363 "strip_size_kb": 0, 00:15:44.363 "state": "online", 00:15:44.363 "raid_level": "raid1", 00:15:44.363 "superblock": true, 00:15:44.363 "num_base_bdevs": 2, 00:15:44.363 "num_base_bdevs_discovered": 2, 00:15:44.363 "num_base_bdevs_operational": 2, 00:15:44.363 "process": { 00:15:44.363 "type": "rebuild", 00:15:44.363 "target": "spare", 00:15:44.363 "progress": { 
00:15:44.363 "blocks": 10240, 00:15:44.363 "percent": 16 00:15:44.363 } 00:15:44.363 }, 00:15:44.363 "base_bdevs_list": [ 00:15:44.363 { 00:15:44.363 "name": "spare", 00:15:44.363 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:44.363 "is_configured": true, 00:15:44.363 "data_offset": 2048, 00:15:44.363 "data_size": 63488 00:15:44.363 }, 00:15:44.363 { 00:15:44.363 "name": "BaseBdev2", 00:15:44.363 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:44.363 "is_configured": true, 00:15:44.363 "data_offset": 2048, 00:15:44.363 "data_size": 63488 00:15:44.363 } 00:15:44.363 ] 00:15:44.363 }' 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.363 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 [2024-10-01 13:49:54.451484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.363 [2024-10-01 13:49:54.518690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:44.363 [2024-10-01 13:49:54.519438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:44.625 [2024-10-01 13:49:54.620933] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:44.625 [2024-10-01 13:49:54.636558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.625 [2024-10-01 13:49:54.636868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.625 [2024-10-01 13:49:54.636908] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.625 [2024-10-01 13:49:54.679575] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.625 "name": "raid_bdev1", 00:15:44.625 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:44.625 "strip_size_kb": 0, 00:15:44.625 "state": "online", 00:15:44.625 "raid_level": "raid1", 00:15:44.625 "superblock": true, 00:15:44.625 "num_base_bdevs": 2, 00:15:44.625 "num_base_bdevs_discovered": 1, 00:15:44.625 "num_base_bdevs_operational": 1, 00:15:44.625 "base_bdevs_list": [ 00:15:44.625 { 00:15:44.625 "name": null, 00:15:44.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.625 "is_configured": false, 00:15:44.625 "data_offset": 0, 00:15:44.625 "data_size": 63488 00:15:44.625 }, 00:15:44.625 { 00:15:44.625 "name": "BaseBdev2", 00:15:44.625 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:44.625 "is_configured": true, 00:15:44.625 "data_offset": 2048, 00:15:44.625 "data_size": 63488 00:15:44.625 } 00:15:44.625 ] 00:15:44.625 }' 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.625 13:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.144 135.50 IOPS, 406.50 MiB/s 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.144 13:49:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.144 "name": "raid_bdev1", 00:15:45.144 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:45.144 "strip_size_kb": 0, 00:15:45.144 "state": "online", 00:15:45.144 "raid_level": "raid1", 00:15:45.144 "superblock": true, 00:15:45.144 "num_base_bdevs": 2, 00:15:45.144 "num_base_bdevs_discovered": 1, 00:15:45.144 "num_base_bdevs_operational": 1, 00:15:45.144 "base_bdevs_list": [ 00:15:45.144 { 00:15:45.144 "name": null, 00:15:45.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.144 "is_configured": false, 00:15:45.144 "data_offset": 0, 00:15:45.144 "data_size": 63488 00:15:45.144 }, 00:15:45.144 { 00:15:45.144 "name": "BaseBdev2", 00:15:45.144 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:45.144 "is_configured": true, 00:15:45.144 "data_offset": 2048, 00:15:45.144 "data_size": 63488 00:15:45.144 } 00:15:45.144 ] 00:15:45.144 }' 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.144 13:49:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.144 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.144 [2024-10-01 13:49:55.293210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.403 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.403 13:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:45.403 [2024-10-01 13:49:55.362347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:45.403 [2024-10-01 13:49:55.364769] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.403 [2024-10-01 13:49:55.480144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.403 [2024-10-01 13:49:55.481036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.662 [2024-10-01 13:49:55.604824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:45.662 [2024-10-01 13:49:55.605188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:45.662 [2024-10-01 13:49:55.844155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:45.921 141.33 IOPS, 424.00 MiB/s [2024-10-01 13:49:56.057921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 
12288 00:15:45.921 [2024-10-01 13:49:56.058255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:46.180 [2024-10-01 13:49:56.305533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.180 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.438 "name": "raid_bdev1", 00:15:46.438 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:46.438 "strip_size_kb": 0, 00:15:46.438 "state": "online", 00:15:46.438 "raid_level": "raid1", 00:15:46.438 "superblock": true, 00:15:46.438 "num_base_bdevs": 2, 00:15:46.438 "num_base_bdevs_discovered": 2, 00:15:46.438 "num_base_bdevs_operational": 2, 00:15:46.438 "process": { 00:15:46.438 "type": "rebuild", 
00:15:46.438 "target": "spare", 00:15:46.438 "progress": { 00:15:46.438 "blocks": 14336, 00:15:46.438 "percent": 22 00:15:46.438 } 00:15:46.438 }, 00:15:46.438 "base_bdevs_list": [ 00:15:46.438 { 00:15:46.438 "name": "spare", 00:15:46.438 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:46.438 "is_configured": true, 00:15:46.438 "data_offset": 2048, 00:15:46.438 "data_size": 63488 00:15:46.438 }, 00:15:46.438 { 00:15:46.438 "name": "BaseBdev2", 00:15:46.438 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:46.438 "is_configured": true, 00:15:46.438 "data_offset": 2048, 00:15:46.438 "data_size": 63488 00:15:46.438 } 00:15:46.438 ] 00:15:46.438 }' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:46.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.438 [2024-10-01 13:49:56.514895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.438 "name": "raid_bdev1", 00:15:46.438 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:46.438 "strip_size_kb": 0, 00:15:46.438 "state": "online", 00:15:46.438 "raid_level": "raid1", 00:15:46.438 "superblock": true, 00:15:46.438 "num_base_bdevs": 2, 00:15:46.438 "num_base_bdevs_discovered": 2, 00:15:46.438 "num_base_bdevs_operational": 2, 00:15:46.438 "process": { 00:15:46.438 "type": "rebuild", 00:15:46.438 "target": "spare", 00:15:46.438 "progress": { 00:15:46.438 "blocks": 14336, 00:15:46.438 "percent": 22 00:15:46.438 } 00:15:46.438 }, 00:15:46.438 
"base_bdevs_list": [ 00:15:46.438 { 00:15:46.438 "name": "spare", 00:15:46.438 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:46.438 "is_configured": true, 00:15:46.438 "data_offset": 2048, 00:15:46.438 "data_size": 63488 00:15:46.438 }, 00:15:46.438 { 00:15:46.438 "name": "BaseBdev2", 00:15:46.438 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:46.438 "is_configured": true, 00:15:46.438 "data_offset": 2048, 00:15:46.438 "data_size": 63488 00:15:46.438 } 00:15:46.438 ] 00:15:46.438 }' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.438 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.697 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.697 13:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.697 [2024-10-01 13:49:56.718393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:46.697 [2024-10-01 13:49:56.719177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:46.697 [2024-10-01 13:49:56.834436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:47.213 125.50 IOPS, 376.50 MiB/s [2024-10-01 13:49:57.188030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:47.471 [2024-10-01 13:49:57.537151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.471 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.729 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.729 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.729 "name": "raid_bdev1", 00:15:47.729 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:47.729 "strip_size_kb": 0, 00:15:47.729 "state": "online", 00:15:47.729 "raid_level": "raid1", 00:15:47.729 "superblock": true, 00:15:47.729 "num_base_bdevs": 2, 00:15:47.729 "num_base_bdevs_discovered": 2, 00:15:47.729 "num_base_bdevs_operational": 2, 00:15:47.729 "process": { 00:15:47.729 "type": "rebuild", 00:15:47.729 "target": "spare", 00:15:47.729 "progress": { 00:15:47.729 "blocks": 34816, 00:15:47.729 "percent": 54 00:15:47.729 } 00:15:47.729 }, 00:15:47.729 "base_bdevs_list": [ 00:15:47.729 { 00:15:47.729 "name": "spare", 00:15:47.729 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:47.729 "is_configured": 
true, 00:15:47.729 "data_offset": 2048, 00:15:47.729 "data_size": 63488 00:15:47.729 }, 00:15:47.729 { 00:15:47.729 "name": "BaseBdev2", 00:15:47.729 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:47.729 "is_configured": true, 00:15:47.729 "data_offset": 2048, 00:15:47.729 "data_size": 63488 00:15:47.729 } 00:15:47.729 ] 00:15:47.729 }' 00:15:47.729 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.729 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.729 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.730 [2024-10-01 13:49:57.777066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:47.730 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.730 13:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.926 112.20 IOPS, 336.60 MiB/s 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.926 "name": "raid_bdev1", 00:15:48.926 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:48.926 "strip_size_kb": 0, 00:15:48.926 "state": "online", 00:15:48.926 "raid_level": "raid1", 00:15:48.926 "superblock": true, 00:15:48.926 "num_base_bdevs": 2, 00:15:48.926 "num_base_bdevs_discovered": 2, 00:15:48.926 "num_base_bdevs_operational": 2, 00:15:48.926 "process": { 00:15:48.926 "type": "rebuild", 00:15:48.926 "target": "spare", 00:15:48.926 "progress": { 00:15:48.926 "blocks": 57344, 00:15:48.926 "percent": 90 00:15:48.926 } 00:15:48.926 }, 00:15:48.926 "base_bdevs_list": [ 00:15:48.926 { 00:15:48.926 "name": "spare", 00:15:48.926 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:48.926 "is_configured": true, 00:15:48.926 "data_offset": 2048, 00:15:48.926 "data_size": 63488 00:15:48.926 }, 00:15:48.926 { 00:15:48.926 "name": "BaseBdev2", 00:15:48.926 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:48.926 "is_configured": true, 00:15:48.926 "data_offset": 2048, 00:15:48.926 "data_size": 63488 00:15:48.926 } 00:15:48.926 ] 00:15:48.926 }' 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:48.926 13:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.926 97.83 IOPS, 293.50 MiB/s [2024-10-01 13:49:59.110853] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:49.184 [2024-10-01 13:49:59.216935] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:49.184 [2024-10-01 13:49:59.219701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.751 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.009 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.009 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.009 "name": "raid_bdev1", 00:15:50.009 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:50.009 "strip_size_kb": 0, 00:15:50.009 
"state": "online", 00:15:50.009 "raid_level": "raid1", 00:15:50.009 "superblock": true, 00:15:50.009 "num_base_bdevs": 2, 00:15:50.009 "num_base_bdevs_discovered": 2, 00:15:50.009 "num_base_bdevs_operational": 2, 00:15:50.009 "base_bdevs_list": [ 00:15:50.009 { 00:15:50.009 "name": "spare", 00:15:50.009 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:50.009 "is_configured": true, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 }, 00:15:50.009 { 00:15:50.009 "name": "BaseBdev2", 00:15:50.009 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:50.009 "is_configured": true, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 } 00:15:50.009 ] 00:15:50.009 }' 00:15:50.009 13:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.009 88.29 IOPS, 264.86 MiB/s 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.009 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.009 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.009 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.010 "name": "raid_bdev1", 00:15:50.010 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:50.010 "strip_size_kb": 0, 00:15:50.010 "state": "online", 00:15:50.010 "raid_level": "raid1", 00:15:50.010 "superblock": true, 00:15:50.010 "num_base_bdevs": 2, 00:15:50.010 "num_base_bdevs_discovered": 2, 00:15:50.010 "num_base_bdevs_operational": 2, 00:15:50.010 "base_bdevs_list": [ 00:15:50.010 { 00:15:50.010 "name": "spare", 00:15:50.010 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:50.010 "is_configured": true, 00:15:50.010 "data_offset": 2048, 00:15:50.010 "data_size": 63488 00:15:50.010 }, 00:15:50.010 { 00:15:50.010 "name": "BaseBdev2", 00:15:50.010 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:50.010 "is_configured": true, 00:15:50.010 "data_offset": 2048, 00:15:50.010 "data_size": 63488 00:15:50.010 } 00:15:50.010 ] 00:15:50.010 }' 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.010 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.267 "name": "raid_bdev1", 00:15:50.267 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:50.267 "strip_size_kb": 0, 00:15:50.267 "state": "online", 00:15:50.267 "raid_level": "raid1", 00:15:50.267 "superblock": true, 00:15:50.267 "num_base_bdevs": 2, 00:15:50.267 "num_base_bdevs_discovered": 2, 
00:15:50.267 "num_base_bdevs_operational": 2, 00:15:50.267 "base_bdevs_list": [ 00:15:50.267 { 00:15:50.267 "name": "spare", 00:15:50.267 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:50.267 "is_configured": true, 00:15:50.267 "data_offset": 2048, 00:15:50.267 "data_size": 63488 00:15:50.267 }, 00:15:50.267 { 00:15:50.267 "name": "BaseBdev2", 00:15:50.267 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:50.267 "is_configured": true, 00:15:50.267 "data_offset": 2048, 00:15:50.267 "data_size": 63488 00:15:50.267 } 00:15:50.267 ] 00:15:50.267 }' 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.267 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.524 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.524 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.524 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.524 [2024-10-01 13:50:00.642956] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.524 [2024-10-01 13:50:00.643000] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.783 00:15:50.783 Latency(us) 00:15:50.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.783 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:50.783 raid_bdev1 : 7.74 83.72 251.16 0.00 0.00 15702.09 315.84 113701.01 00:15:50.783 =================================================================================================================== 00:15:50.783 Total : 83.72 251.16 0.00 0.00 15702.09 315.84 113701.01 00:15:50.783 [2024-10-01 13:50:00.746620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.783 [2024-10-01 
13:50:00.746691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.783 [2024-10-01 13:50:00.746786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.783 [2024-10-01 13:50:00.746804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.783 { 00:15:50.783 "results": [ 00:15:50.783 { 00:15:50.783 "job": "raid_bdev1", 00:15:50.783 "core_mask": "0x1", 00:15:50.783 "workload": "randrw", 00:15:50.783 "percentage": 50, 00:15:50.783 "status": "finished", 00:15:50.783 "queue_depth": 2, 00:15:50.783 "io_size": 3145728, 00:15:50.783 "runtime": 7.740211, 00:15:50.783 "iops": 83.71864798000985, 00:15:50.783 "mibps": 251.15594394002954, 00:15:50.783 "io_failed": 0, 00:15:50.783 "io_timeout": 0, 00:15:50.783 "avg_latency_us": 15702.093628836334, 00:15:50.783 "min_latency_us": 315.8361445783133, 00:15:50.783 "max_latency_us": 113701.01204819277 00:15:50.783 } 00:15:50.783 ], 00:15:50.783 "core_count": 1 00:15:50.783 } 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.783 13:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:51.041 /dev/nbd0 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:51.042 
13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.042 1+0 records in 00:15:51.042 1+0 records out 00:15:51.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437086 s, 9.4 MB/s 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.042 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:51.301 /dev/nbd1 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.301 1+0 records in 00:15:51.301 1+0 records out 00:15:51.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315602 s, 13.0 MB/s 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.301 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.559 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.560 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.560 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.560 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.560 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.817 13:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.817 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.817 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.817 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.817 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.817 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.075 [2024-10-01 13:50:02.033017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:52.075 [2024-10-01 13:50:02.033328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.075 [2024-10-01 13:50:02.033364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:52.075 [2024-10-01 13:50:02.033380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.075 [2024-10-01 13:50:02.036183] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.075 [2024-10-01 13:50:02.036258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:52.075 [2024-10-01 13:50:02.036387] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:52.075 [2024-10-01 13:50:02.036610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.075 [2024-10-01 13:50:02.036862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.075 spare 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.075 [2024-10-01 13:50:02.137017] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:52.075 [2024-10-01 13:50:02.137076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.075 [2024-10-01 13:50:02.137496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:52.075 [2024-10-01 13:50:02.137729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:52.075 [2024-10-01 13:50:02.137745] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:52.075 [2024-10-01 13:50:02.137969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.075 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.075 "name": "raid_bdev1", 00:15:52.075 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:52.075 "strip_size_kb": 0, 00:15:52.075 "state": "online", 00:15:52.075 "raid_level": "raid1", 00:15:52.075 "superblock": true, 00:15:52.075 "num_base_bdevs": 2, 00:15:52.075 
"num_base_bdevs_discovered": 2, 00:15:52.075 "num_base_bdevs_operational": 2, 00:15:52.075 "base_bdevs_list": [ 00:15:52.075 { 00:15:52.075 "name": "spare", 00:15:52.075 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:52.075 "is_configured": true, 00:15:52.075 "data_offset": 2048, 00:15:52.076 "data_size": 63488 00:15:52.076 }, 00:15:52.076 { 00:15:52.076 "name": "BaseBdev2", 00:15:52.076 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:52.076 "is_configured": true, 00:15:52.076 "data_offset": 2048, 00:15:52.076 "data_size": 63488 00:15:52.076 } 00:15:52.076 ] 00:15:52.076 }' 00:15:52.076 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.076 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.642 "name": "raid_bdev1", 00:15:52.642 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:52.642 "strip_size_kb": 0, 00:15:52.642 "state": "online", 00:15:52.642 "raid_level": "raid1", 00:15:52.642 "superblock": true, 00:15:52.642 "num_base_bdevs": 2, 00:15:52.642 "num_base_bdevs_discovered": 2, 00:15:52.642 "num_base_bdevs_operational": 2, 00:15:52.642 "base_bdevs_list": [ 00:15:52.642 { 00:15:52.642 "name": "spare", 00:15:52.642 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:52.642 "is_configured": true, 00:15:52.642 "data_offset": 2048, 00:15:52.642 "data_size": 63488 00:15:52.642 }, 00:15:52.642 { 00:15:52.642 "name": "BaseBdev2", 00:15:52.642 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:52.642 "is_configured": true, 00:15:52.642 "data_offset": 2048, 00:15:52.642 "data_size": 63488 00:15:52.642 } 00:15:52.642 ] 00:15:52.642 }' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 [2024-10-01 13:50:02.753171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.642 
13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.642 "name": "raid_bdev1", 00:15:52.642 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:52.642 "strip_size_kb": 0, 00:15:52.642 "state": "online", 00:15:52.642 "raid_level": "raid1", 00:15:52.642 "superblock": true, 00:15:52.642 "num_base_bdevs": 2, 00:15:52.642 "num_base_bdevs_discovered": 1, 00:15:52.642 "num_base_bdevs_operational": 1, 00:15:52.642 "base_bdevs_list": [ 00:15:52.642 { 00:15:52.642 "name": null, 00:15:52.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.642 "is_configured": false, 00:15:52.642 "data_offset": 0, 00:15:52.642 "data_size": 63488 00:15:52.642 }, 00:15:52.642 { 00:15:52.642 "name": "BaseBdev2", 00:15:52.642 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:52.642 "is_configured": true, 00:15:52.642 "data_offset": 2048, 00:15:52.642 "data_size": 63488 00:15:52.642 } 00:15:52.642 ] 00:15:52.642 }' 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.642 13:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.211 13:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.211 13:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.211 13:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.211 [2024-10-01 13:50:03.208635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.211 [2024-10-01 13:50:03.208932] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.211 [2024-10-01 13:50:03.208953] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:53.211 [2024-10-01 13:50:03.209033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.211 [2024-10-01 13:50:03.226636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:53.211 13:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.211 13:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:53.211 [2024-10-01 13:50:03.229077] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.144 "name": "raid_bdev1", 00:15:54.144 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:54.144 "strip_size_kb": 0, 00:15:54.144 "state": "online", 00:15:54.144 "raid_level": "raid1", 00:15:54.144 "superblock": true, 00:15:54.144 "num_base_bdevs": 2, 00:15:54.144 "num_base_bdevs_discovered": 2, 00:15:54.144 "num_base_bdevs_operational": 2, 00:15:54.144 "process": { 00:15:54.144 "type": "rebuild", 00:15:54.144 "target": "spare", 00:15:54.144 "progress": { 00:15:54.144 "blocks": 20480, 00:15:54.144 "percent": 32 00:15:54.144 } 00:15:54.144 }, 00:15:54.144 "base_bdevs_list": [ 00:15:54.144 { 00:15:54.144 "name": "spare", 00:15:54.144 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:54.144 "is_configured": true, 00:15:54.144 "data_offset": 2048, 00:15:54.144 "data_size": 63488 00:15:54.144 }, 00:15:54.144 { 00:15:54.144 "name": "BaseBdev2", 00:15:54.144 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:54.144 "is_configured": true, 00:15:54.144 "data_offset": 2048, 00:15:54.144 "data_size": 63488 00:15:54.144 } 00:15:54.144 ] 00:15:54.144 }' 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.144 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.403 
[2024-10-01 13:50:04.364631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.403 [2024-10-01 13:50:04.435331] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.403 [2024-10-01 13:50:04.435459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.403 [2024-10-01 13:50:04.435485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.403 [2024-10-01 13:50:04.435495] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.403 
13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.403 "name": "raid_bdev1", 00:15:54.403 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:54.403 "strip_size_kb": 0, 00:15:54.403 "state": "online", 00:15:54.403 "raid_level": "raid1", 00:15:54.403 "superblock": true, 00:15:54.403 "num_base_bdevs": 2, 00:15:54.403 "num_base_bdevs_discovered": 1, 00:15:54.403 "num_base_bdevs_operational": 1, 00:15:54.403 "base_bdevs_list": [ 00:15:54.403 { 00:15:54.403 "name": null, 00:15:54.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.403 "is_configured": false, 00:15:54.403 "data_offset": 0, 00:15:54.403 "data_size": 63488 00:15:54.403 }, 00:15:54.403 { 00:15:54.403 "name": "BaseBdev2", 00:15:54.403 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:54.403 "is_configured": true, 00:15:54.403 "data_offset": 2048, 00:15:54.403 "data_size": 63488 00:15:54.403 } 00:15:54.403 ] 00:15:54.403 }' 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.403 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.969 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.969 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.969 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.969 [2024-10-01 13:50:04.918361] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.969 [2024-10-01 13:50:04.918671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.969 [2024-10-01 13:50:04.918712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:54.969 [2024-10-01 13:50:04.918725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.969 [2024-10-01 13:50:04.919291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.969 [2024-10-01 13:50:04.919313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.969 [2024-10-01 13:50:04.919444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:54.969 [2024-10-01 13:50:04.919476] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.969 [2024-10-01 13:50:04.919515] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:54.969 [2024-10-01 13:50:04.919542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.969 [2024-10-01 13:50:04.936475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:54.969 spare 00:15:54.969 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.969 13:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:54.969 [2024-10-01 13:50:04.939024] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.904 "name": "raid_bdev1", 00:15:55.904 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:55.904 "strip_size_kb": 0, 00:15:55.904 
"state": "online", 00:15:55.904 "raid_level": "raid1", 00:15:55.904 "superblock": true, 00:15:55.904 "num_base_bdevs": 2, 00:15:55.904 "num_base_bdevs_discovered": 2, 00:15:55.904 "num_base_bdevs_operational": 2, 00:15:55.904 "process": { 00:15:55.904 "type": "rebuild", 00:15:55.904 "target": "spare", 00:15:55.904 "progress": { 00:15:55.904 "blocks": 20480, 00:15:55.904 "percent": 32 00:15:55.904 } 00:15:55.904 }, 00:15:55.904 "base_bdevs_list": [ 00:15:55.904 { 00:15:55.904 "name": "spare", 00:15:55.904 "uuid": "ef85084f-b71a-53e8-bf90-6d8f0cbbc130", 00:15:55.904 "is_configured": true, 00:15:55.904 "data_offset": 2048, 00:15:55.904 "data_size": 63488 00:15:55.904 }, 00:15:55.904 { 00:15:55.904 "name": "BaseBdev2", 00:15:55.904 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:55.904 "is_configured": true, 00:15:55.904 "data_offset": 2048, 00:15:55.904 "data_size": 63488 00:15:55.904 } 00:15:55.904 ] 00:15:55.904 }' 00:15:55.904 13:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.904 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 [2024-10-01 13:50:06.082652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.163 [2024-10-01 13:50:06.145710] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:56.163 [2024-10-01 13:50:06.145843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.163 [2024-10-01 13:50:06.145863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.163 [2024-10-01 13:50:06.145881] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.163 13:50:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.163 "name": "raid_bdev1", 00:15:56.163 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:56.163 "strip_size_kb": 0, 00:15:56.163 "state": "online", 00:15:56.163 "raid_level": "raid1", 00:15:56.163 "superblock": true, 00:15:56.163 "num_base_bdevs": 2, 00:15:56.163 "num_base_bdevs_discovered": 1, 00:15:56.163 "num_base_bdevs_operational": 1, 00:15:56.163 "base_bdevs_list": [ 00:15:56.163 { 00:15:56.163 "name": null, 00:15:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.163 "is_configured": false, 00:15:56.163 "data_offset": 0, 00:15:56.163 "data_size": 63488 00:15:56.163 }, 00:15:56.163 { 00:15:56.163 "name": "BaseBdev2", 00:15:56.163 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:56.163 "is_configured": true, 00:15:56.163 "data_offset": 2048, 00:15:56.163 "data_size": 63488 00:15:56.163 } 00:15:56.163 ] 00:15:56.163 }' 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.163 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.729 "name": "raid_bdev1", 00:15:56.729 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:56.729 "strip_size_kb": 0, 00:15:56.729 "state": "online", 00:15:56.729 "raid_level": "raid1", 00:15:56.729 "superblock": true, 00:15:56.729 "num_base_bdevs": 2, 00:15:56.729 "num_base_bdevs_discovered": 1, 00:15:56.729 "num_base_bdevs_operational": 1, 00:15:56.729 "base_bdevs_list": [ 00:15:56.729 { 00:15:56.729 "name": null, 00:15:56.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.729 "is_configured": false, 00:15:56.729 "data_offset": 0, 00:15:56.729 "data_size": 63488 00:15:56.729 }, 00:15:56.729 { 00:15:56.729 "name": "BaseBdev2", 00:15:56.729 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:56.729 "is_configured": true, 00:15:56.729 "data_offset": 2048, 00:15:56.729 "data_size": 63488 00:15:56.729 } 00:15:56.729 ] 00:15:56.729 }' 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.729 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.729 [2024-10-01 13:50:06.727510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:56.729 [2024-10-01 13:50:06.727721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.729 [2024-10-01 13:50:06.727754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:56.729 [2024-10-01 13:50:06.727768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.729 [2024-10-01 13:50:06.728237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.729 [2024-10-01 13:50:06.728266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.729 [2024-10-01 13:50:06.728348] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:56.729 [2024-10-01 13:50:06.728368] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:56.729 [2024-10-01 13:50:06.728377] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:56.729 [2024-10-01 13:50:06.728393] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:56.729 BaseBdev1 00:15:56.730 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.730 13:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.667 "name": "raid_bdev1", 00:15:57.667 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:57.667 "strip_size_kb": 0, 00:15:57.667 "state": "online", 00:15:57.667 "raid_level": "raid1", 00:15:57.667 "superblock": true, 00:15:57.667 "num_base_bdevs": 2, 00:15:57.667 "num_base_bdevs_discovered": 1, 00:15:57.667 "num_base_bdevs_operational": 1, 00:15:57.667 "base_bdevs_list": [ 00:15:57.667 { 00:15:57.667 "name": null, 00:15:57.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.667 "is_configured": false, 00:15:57.667 "data_offset": 0, 00:15:57.667 "data_size": 63488 00:15:57.667 }, 00:15:57.667 { 00:15:57.667 "name": "BaseBdev2", 00:15:57.667 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:57.667 "is_configured": true, 00:15:57.667 "data_offset": 2048, 00:15:57.667 "data_size": 63488 00:15:57.667 } 00:15:57.667 ] 00:15:57.667 }' 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.667 13:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.235 "name": "raid_bdev1", 00:15:58.235 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:58.235 "strip_size_kb": 0, 00:15:58.235 "state": "online", 00:15:58.235 "raid_level": "raid1", 00:15:58.235 "superblock": true, 00:15:58.235 "num_base_bdevs": 2, 00:15:58.235 "num_base_bdevs_discovered": 1, 00:15:58.235 "num_base_bdevs_operational": 1, 00:15:58.235 "base_bdevs_list": [ 00:15:58.235 { 00:15:58.235 "name": null, 00:15:58.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.235 "is_configured": false, 00:15:58.235 "data_offset": 0, 00:15:58.235 "data_size": 63488 00:15:58.235 }, 00:15:58.235 { 00:15:58.235 "name": "BaseBdev2", 00:15:58.235 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:58.235 "is_configured": true, 00:15:58.235 "data_offset": 2048, 00:15:58.235 "data_size": 63488 00:15:58.235 } 00:15:58.235 ] 00:15:58.235 }' 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 [2024-10-01 13:50:08.334394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.235 [2024-10-01 13:50:08.335129] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:58.235 [2024-10-01 13:50:08.335158] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:58.235 request: 00:15:58.235 { 00:15:58.235 "base_bdev": "BaseBdev1", 00:15:58.235 "raid_bdev": "raid_bdev1", 00:15:58.235 "method": "bdev_raid_add_base_bdev", 00:15:58.235 "req_id": 1 00:15:58.235 } 00:15:58.235 Got JSON-RPC error response 00:15:58.235 response: 00:15:58.235 { 00:15:58.235 "code": -22, 00:15:58.235 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:58.235 } 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.235 13:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.169 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.481 "name": "raid_bdev1", 00:15:59.481 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:59.481 "strip_size_kb": 0, 00:15:59.481 "state": "online", 00:15:59.481 "raid_level": "raid1", 00:15:59.481 "superblock": true, 00:15:59.481 "num_base_bdevs": 2, 00:15:59.481 "num_base_bdevs_discovered": 1, 00:15:59.481 "num_base_bdevs_operational": 1, 00:15:59.481 "base_bdevs_list": [ 00:15:59.481 { 00:15:59.481 "name": null, 00:15:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.481 "is_configured": false, 00:15:59.481 "data_offset": 0, 00:15:59.481 "data_size": 63488 00:15:59.481 }, 00:15:59.481 { 00:15:59.481 "name": "BaseBdev2", 00:15:59.481 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:59.481 "is_configured": true, 00:15:59.481 "data_offset": 2048, 00:15:59.481 "data_size": 63488 00:15:59.481 } 00:15:59.481 ] 00:15:59.481 }' 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.481 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.776 13:50:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.776 "name": "raid_bdev1", 00:15:59.776 "uuid": "6b05f3d8-d1b5-497e-b278-fcf48581f3f7", 00:15:59.776 "strip_size_kb": 0, 00:15:59.776 "state": "online", 00:15:59.776 "raid_level": "raid1", 00:15:59.776 "superblock": true, 00:15:59.776 "num_base_bdevs": 2, 00:15:59.776 "num_base_bdevs_discovered": 1, 00:15:59.776 "num_base_bdevs_operational": 1, 00:15:59.776 "base_bdevs_list": [ 00:15:59.776 { 00:15:59.776 "name": null, 00:15:59.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.776 "is_configured": false, 00:15:59.776 "data_offset": 0, 00:15:59.776 "data_size": 63488 00:15:59.776 }, 00:15:59.776 { 00:15:59.776 "name": "BaseBdev2", 00:15:59.776 "uuid": "c7938ae4-757b-540f-bfb6-8d59e1934fe7", 00:15:59.776 "is_configured": true, 00:15:59.776 "data_offset": 2048, 00:15:59.776 "data_size": 63488 00:15:59.776 } 00:15:59.776 ] 00:15:59.776 }' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.776 13:50:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76854 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76854 ']' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76854 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76854 00:15:59.776 killing process with pid 76854 00:15:59.776 Received shutdown signal, test time was about 16.951818 seconds 00:15:59.776 00:15:59.776 Latency(us) 00:15:59.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.776 =================================================================================================================== 00:15:59.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76854' 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76854 00:15:59.776 [2024-10-01 13:50:09.919375] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.776 [2024-10-01 13:50:09.919537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.776 13:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76854 00:15:59.776 [2024-10-01 13:50:09.919618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:15:59.776 [2024-10-01 13:50:09.919630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:00.034 [2024-10-01 13:50:10.156164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:01.412 00:16:01.412 real 0m20.392s 00:16:01.412 user 0m26.472s 00:16:01.412 sys 0m2.516s 00:16:01.412 ************************************ 00:16:01.412 END TEST raid_rebuild_test_sb_io 00:16:01.412 ************************************ 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.412 13:50:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:01.412 13:50:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:01.412 13:50:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:01.412 13:50:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.412 13:50:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.412 ************************************ 00:16:01.412 START TEST raid_rebuild_test 00:16:01.412 ************************************ 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:01.412 13:50:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:01.412 
13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:01.412 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77537 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77537 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77537 ']' 00:16:01.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.413 13:50:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.672 [2024-10-01 13:50:11.717285] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:16:01.672 [2024-10-01 13:50:11.717656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:01.672 Zero copy mechanism will not be used. 00:16:01.672 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77537 ] 00:16:01.932 [2024-10-01 13:50:11.900705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.932 [2024-10-01 13:50:12.122285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.192 [2024-10-01 13:50:12.342327] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.192 [2024-10-01 13:50:12.342372] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.456 BaseBdev1_malloc 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:02.456 [2024-10-01 13:50:12.618128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:02.456 [2024-10-01 13:50:12.618441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.456 [2024-10-01 13:50:12.618479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.456 [2024-10-01 13:50:12.618500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.456 [2024-10-01 13:50:12.621202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.456 [2024-10-01 13:50:12.621253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:02.456 BaseBdev1 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.456 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 BaseBdev2_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 [2024-10-01 13:50:12.690712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:02.717 [2024-10-01 13:50:12.690799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:02.717 [2024-10-01 13:50:12.690822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:02.717 [2024-10-01 13:50:12.690839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.717 [2024-10-01 13:50:12.693459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.717 [2024-10-01 13:50:12.693506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:02.717 BaseBdev2 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 BaseBdev3_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 [2024-10-01 13:50:12.749725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:02.717 [2024-10-01 13:50:12.749803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.717 [2024-10-01 13:50:12.749828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.717 [2024-10-01 13:50:12.749843] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.717 [2024-10-01 13:50:12.752563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.717 [2024-10-01 13:50:12.752615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:02.717 BaseBdev3 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 BaseBdev4_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 [2024-10-01 13:50:12.809256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:02.717 [2024-10-01 13:50:12.809349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.717 [2024-10-01 13:50:12.809378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:02.717 [2024-10-01 13:50:12.809394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.717 [2024-10-01 13:50:12.812058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.717 [2024-10-01 13:50:12.812110] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:02.717 BaseBdev4 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 spare_malloc 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 spare_delay 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.717 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 [2024-10-01 13:50:12.882967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:02.717 [2024-10-01 13:50:12.883059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.717 [2024-10-01 13:50:12.883089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:02.717 [2024-10-01 13:50:12.883105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.718 [2024-10-01 
13:50:12.885777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.718 [2024-10-01 13:50:12.885825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:02.718 spare 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.718 [2024-10-01 13:50:12.895026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.718 [2024-10-01 13:50:12.897294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.718 [2024-10-01 13:50:12.897379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.718 [2024-10-01 13:50:12.897447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.718 [2024-10-01 13:50:12.897545] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:02.718 [2024-10-01 13:50:12.897560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:02.718 [2024-10-01 13:50:12.897901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:02.718 [2024-10-01 13:50:12.898095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:02.718 [2024-10-01 13:50:12.898106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:02.718 [2024-10-01 13:50:12.898306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.718 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.977 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.977 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.977 "name": "raid_bdev1", 00:16:02.977 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:02.977 "strip_size_kb": 0, 00:16:02.977 "state": "online", 00:16:02.977 "raid_level": 
"raid1", 00:16:02.977 "superblock": false, 00:16:02.977 "num_base_bdevs": 4, 00:16:02.977 "num_base_bdevs_discovered": 4, 00:16:02.977 "num_base_bdevs_operational": 4, 00:16:02.977 "base_bdevs_list": [ 00:16:02.977 { 00:16:02.977 "name": "BaseBdev1", 00:16:02.977 "uuid": "eae7dd4f-a8cd-5775-8afe-7ecc7a1a614e", 00:16:02.977 "is_configured": true, 00:16:02.977 "data_offset": 0, 00:16:02.977 "data_size": 65536 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "name": "BaseBdev2", 00:16:02.977 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:02.977 "is_configured": true, 00:16:02.977 "data_offset": 0, 00:16:02.977 "data_size": 65536 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "name": "BaseBdev3", 00:16:02.978 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:02.978 "is_configured": true, 00:16:02.978 "data_offset": 0, 00:16:02.978 "data_size": 65536 00:16:02.978 }, 00:16:02.978 { 00:16:02.978 "name": "BaseBdev4", 00:16:02.978 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:02.978 "is_configured": true, 00:16:02.978 "data_offset": 0, 00:16:02.978 "data_size": 65536 00:16:02.978 } 00:16:02.978 ] 00:16:02.978 }' 00:16:02.978 13:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.978 13:50:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.237 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.237 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.237 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.237 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:03.237 [2024-10-01 13:50:13.338841] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.237 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.237 13:50:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:03.238 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.238 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:03.238 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.238 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.238 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:03.497 [2024-10-01 13:50:13.642119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:03.497 /dev/nbd0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.497 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.757 1+0 records in 00:16:03.757 1+0 records out 00:16:03.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458176 s, 8.9 MB/s 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:03.757 13:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:10.319 65536+0 records in 00:16:10.319 65536+0 records out 00:16:10.319 33554432 bytes (34 MB, 32 MiB) copied, 6.57734 s, 5.1 MB/s 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.319 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.319 [2024-10-01 13:50:20.509919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.578 
13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.578 [2024-10-01 13:50:20.546147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.578 13:50:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.578 "name": "raid_bdev1", 00:16:10.578 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:10.578 "strip_size_kb": 0, 00:16:10.578 "state": "online", 00:16:10.578 "raid_level": "raid1", 00:16:10.578 "superblock": false, 00:16:10.578 "num_base_bdevs": 4, 00:16:10.578 "num_base_bdevs_discovered": 3, 00:16:10.578 "num_base_bdevs_operational": 3, 00:16:10.578 "base_bdevs_list": [ 00:16:10.578 { 00:16:10.578 "name": null, 00:16:10.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.578 "is_configured": false, 00:16:10.578 "data_offset": 0, 00:16:10.578 "data_size": 65536 00:16:10.578 }, 00:16:10.578 { 00:16:10.578 "name": "BaseBdev2", 00:16:10.578 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:10.578 "is_configured": true, 00:16:10.578 "data_offset": 0, 00:16:10.578 "data_size": 65536 00:16:10.578 }, 00:16:10.578 { 00:16:10.578 "name": "BaseBdev3", 00:16:10.578 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:10.578 "is_configured": true, 00:16:10.578 "data_offset": 0, 00:16:10.578 "data_size": 65536 00:16:10.578 }, 00:16:10.578 { 00:16:10.578 "name": "BaseBdev4", 00:16:10.578 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:10.578 
"is_configured": true, 00:16:10.578 "data_offset": 0, 00:16:10.578 "data_size": 65536 00:16:10.578 } 00:16:10.578 ] 00:16:10.578 }' 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.578 13:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.837 13:50:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.837 13:50:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.837 13:50:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.837 [2024-10-01 13:50:21.021674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.096 [2024-10-01 13:50:21.037622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:11.096 13:50:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.096 13:50:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.096 [2024-10-01 13:50:21.040283] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.050 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.050 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.050 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.050 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.050 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.051 "name": "raid_bdev1", 00:16:12.051 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:12.051 "strip_size_kb": 0, 00:16:12.051 "state": "online", 00:16:12.051 "raid_level": "raid1", 00:16:12.051 "superblock": false, 00:16:12.051 "num_base_bdevs": 4, 00:16:12.051 "num_base_bdevs_discovered": 4, 00:16:12.051 "num_base_bdevs_operational": 4, 00:16:12.051 "process": { 00:16:12.051 "type": "rebuild", 00:16:12.051 "target": "spare", 00:16:12.051 "progress": { 00:16:12.051 "blocks": 20480, 00:16:12.051 "percent": 31 00:16:12.051 } 00:16:12.051 }, 00:16:12.051 "base_bdevs_list": [ 00:16:12.051 { 00:16:12.051 "name": "spare", 00:16:12.051 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 }, 00:16:12.051 { 00:16:12.051 "name": "BaseBdev2", 00:16:12.051 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 }, 00:16:12.051 { 00:16:12.051 "name": "BaseBdev3", 00:16:12.051 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 }, 00:16:12.051 { 00:16:12.051 "name": "BaseBdev4", 00:16:12.051 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:12.051 "is_configured": true, 00:16:12.051 "data_offset": 0, 00:16:12.051 "data_size": 65536 00:16:12.051 } 00:16:12.051 ] 00:16:12.051 }' 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.051 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.051 [2024-10-01 13:50:22.195730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.310 [2024-10-01 13:50:22.251310] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.310 [2024-10-01 13:50:22.251462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.310 [2024-10-01 13:50:22.251487] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.310 [2024-10-01 13:50:22.251501] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.310 "name": "raid_bdev1", 00:16:12.310 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:12.310 "strip_size_kb": 0, 00:16:12.310 "state": "online", 00:16:12.310 "raid_level": "raid1", 00:16:12.310 "superblock": false, 00:16:12.310 "num_base_bdevs": 4, 00:16:12.310 "num_base_bdevs_discovered": 3, 00:16:12.310 "num_base_bdevs_operational": 3, 00:16:12.310 "base_bdevs_list": [ 00:16:12.310 { 00:16:12.310 "name": null, 00:16:12.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.310 "is_configured": false, 00:16:12.310 "data_offset": 0, 00:16:12.310 "data_size": 65536 00:16:12.310 }, 00:16:12.310 { 00:16:12.310 "name": "BaseBdev2", 00:16:12.310 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:12.310 "is_configured": true, 00:16:12.310 "data_offset": 0, 00:16:12.310 "data_size": 65536 00:16:12.310 }, 00:16:12.310 { 
00:16:12.310 "name": "BaseBdev3", 00:16:12.310 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:12.310 "is_configured": true, 00:16:12.310 "data_offset": 0, 00:16:12.310 "data_size": 65536 00:16:12.310 }, 00:16:12.310 { 00:16:12.310 "name": "BaseBdev4", 00:16:12.310 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:12.310 "is_configured": true, 00:16:12.310 "data_offset": 0, 00:16:12.310 "data_size": 65536 00:16:12.310 } 00:16:12.310 ] 00:16:12.310 }' 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.310 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.569 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.828 "name": "raid_bdev1", 00:16:12.828 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:12.828 "strip_size_kb": 0, 00:16:12.828 "state": "online", 
00:16:12.828 "raid_level": "raid1", 00:16:12.828 "superblock": false, 00:16:12.828 "num_base_bdevs": 4, 00:16:12.828 "num_base_bdevs_discovered": 3, 00:16:12.828 "num_base_bdevs_operational": 3, 00:16:12.828 "base_bdevs_list": [ 00:16:12.828 { 00:16:12.828 "name": null, 00:16:12.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.828 "is_configured": false, 00:16:12.828 "data_offset": 0, 00:16:12.828 "data_size": 65536 00:16:12.828 }, 00:16:12.828 { 00:16:12.828 "name": "BaseBdev2", 00:16:12.828 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:12.828 "is_configured": true, 00:16:12.828 "data_offset": 0, 00:16:12.828 "data_size": 65536 00:16:12.828 }, 00:16:12.828 { 00:16:12.828 "name": "BaseBdev3", 00:16:12.828 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:12.828 "is_configured": true, 00:16:12.828 "data_offset": 0, 00:16:12.828 "data_size": 65536 00:16:12.828 }, 00:16:12.828 { 00:16:12.828 "name": "BaseBdev4", 00:16:12.828 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:12.828 "is_configured": true, 00:16:12.828 "data_offset": 0, 00:16:12.828 "data_size": 65536 00:16:12.828 } 00:16:12.828 ] 00:16:12.828 }' 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.828 [2024-10-01 13:50:22.841769] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.828 [2024-10-01 13:50:22.857163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.828 13:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.828 [2024-10-01 13:50:22.859737] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.764 "name": "raid_bdev1", 00:16:13.764 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:13.764 "strip_size_kb": 0, 00:16:13.764 "state": "online", 00:16:13.764 "raid_level": "raid1", 00:16:13.764 "superblock": false, 00:16:13.764 "num_base_bdevs": 4, 00:16:13.764 
"num_base_bdevs_discovered": 4, 00:16:13.764 "num_base_bdevs_operational": 4, 00:16:13.764 "process": { 00:16:13.764 "type": "rebuild", 00:16:13.764 "target": "spare", 00:16:13.764 "progress": { 00:16:13.764 "blocks": 20480, 00:16:13.764 "percent": 31 00:16:13.764 } 00:16:13.764 }, 00:16:13.764 "base_bdevs_list": [ 00:16:13.764 { 00:16:13.764 "name": "spare", 00:16:13.764 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:13.764 "is_configured": true, 00:16:13.764 "data_offset": 0, 00:16:13.764 "data_size": 65536 00:16:13.764 }, 00:16:13.764 { 00:16:13.764 "name": "BaseBdev2", 00:16:13.764 "uuid": "345d772c-440a-57ff-ae85-3fdc9103da18", 00:16:13.764 "is_configured": true, 00:16:13.764 "data_offset": 0, 00:16:13.764 "data_size": 65536 00:16:13.764 }, 00:16:13.764 { 00:16:13.764 "name": "BaseBdev3", 00:16:13.764 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:13.764 "is_configured": true, 00:16:13.764 "data_offset": 0, 00:16:13.764 "data_size": 65536 00:16:13.764 }, 00:16:13.764 { 00:16:13.764 "name": "BaseBdev4", 00:16:13.764 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:13.764 "is_configured": true, 00:16:13.764 "data_offset": 0, 00:16:13.764 "data_size": 65536 00:16:13.764 } 00:16:13.764 ] 00:16:13.764 }' 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.764 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.024 13:50:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 [2024-10-01 13:50:23.992907] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.024 [2024-10-01 13:50:24.070717] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 13:50:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.024 "name": "raid_bdev1", 00:16:14.024 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:14.024 "strip_size_kb": 0, 00:16:14.024 "state": "online", 00:16:14.024 "raid_level": "raid1", 00:16:14.024 "superblock": false, 00:16:14.024 "num_base_bdevs": 4, 00:16:14.024 "num_base_bdevs_discovered": 3, 00:16:14.024 "num_base_bdevs_operational": 3, 00:16:14.024 "process": { 00:16:14.024 "type": "rebuild", 00:16:14.024 "target": "spare", 00:16:14.024 "progress": { 00:16:14.024 "blocks": 24576, 00:16:14.024 "percent": 37 00:16:14.024 } 00:16:14.024 }, 00:16:14.024 "base_bdevs_list": [ 00:16:14.024 { 00:16:14.024 "name": "spare", 00:16:14.024 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:14.024 "is_configured": true, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 65536 00:16:14.024 }, 00:16:14.024 { 00:16:14.024 "name": null, 00:16:14.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.024 "is_configured": false, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 65536 00:16:14.024 }, 00:16:14.024 { 00:16:14.024 "name": "BaseBdev3", 00:16:14.024 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:14.024 "is_configured": true, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 65536 00:16:14.024 }, 00:16:14.024 { 00:16:14.024 "name": "BaseBdev4", 00:16:14.024 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:14.024 "is_configured": true, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 65536 00:16:14.024 } 00:16:14.024 ] 00:16:14.024 }' 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.024 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.283 "name": "raid_bdev1", 00:16:14.283 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:14.283 "strip_size_kb": 0, 00:16:14.283 "state": "online", 00:16:14.283 "raid_level": "raid1", 00:16:14.283 "superblock": false, 00:16:14.283 "num_base_bdevs": 4, 00:16:14.283 "num_base_bdevs_discovered": 3, 00:16:14.283 "num_base_bdevs_operational": 3, 00:16:14.283 "process": { 00:16:14.283 "type": "rebuild", 00:16:14.283 "target": "spare", 00:16:14.283 "progress": { 
00:16:14.283 "blocks": 26624, 00:16:14.283 "percent": 40 00:16:14.283 } 00:16:14.283 }, 00:16:14.283 "base_bdevs_list": [ 00:16:14.283 { 00:16:14.283 "name": "spare", 00:16:14.283 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:14.283 "is_configured": true, 00:16:14.283 "data_offset": 0, 00:16:14.283 "data_size": 65536 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "name": null, 00:16:14.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.283 "is_configured": false, 00:16:14.283 "data_offset": 0, 00:16:14.283 "data_size": 65536 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "name": "BaseBdev3", 00:16:14.283 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:14.283 "is_configured": true, 00:16:14.283 "data_offset": 0, 00:16:14.283 "data_size": 65536 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "name": "BaseBdev4", 00:16:14.283 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:14.283 "is_configured": true, 00:16:14.283 "data_offset": 0, 00:16:14.283 "data_size": 65536 00:16:14.283 } 00:16:14.283 ] 00:16:14.283 }' 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.283 13:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.221 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.222 "name": "raid_bdev1", 00:16:15.222 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:15.222 "strip_size_kb": 0, 00:16:15.222 "state": "online", 00:16:15.222 "raid_level": "raid1", 00:16:15.222 "superblock": false, 00:16:15.222 "num_base_bdevs": 4, 00:16:15.222 "num_base_bdevs_discovered": 3, 00:16:15.222 "num_base_bdevs_operational": 3, 00:16:15.222 "process": { 00:16:15.222 "type": "rebuild", 00:16:15.222 "target": "spare", 00:16:15.222 "progress": { 00:16:15.222 "blocks": 49152, 00:16:15.222 "percent": 75 00:16:15.222 } 00:16:15.222 }, 00:16:15.222 "base_bdevs_list": [ 00:16:15.222 { 00:16:15.222 "name": "spare", 00:16:15.222 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:15.222 "is_configured": true, 00:16:15.222 "data_offset": 0, 00:16:15.222 "data_size": 65536 00:16:15.222 }, 00:16:15.222 { 00:16:15.222 "name": null, 00:16:15.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.222 "is_configured": false, 00:16:15.222 "data_offset": 0, 00:16:15.222 "data_size": 65536 00:16:15.222 }, 00:16:15.222 { 00:16:15.222 "name": "BaseBdev3", 00:16:15.222 "uuid": 
"7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:15.222 "is_configured": true, 00:16:15.222 "data_offset": 0, 00:16:15.222 "data_size": 65536 00:16:15.222 }, 00:16:15.222 { 00:16:15.222 "name": "BaseBdev4", 00:16:15.222 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:15.222 "is_configured": true, 00:16:15.222 "data_offset": 0, 00:16:15.222 "data_size": 65536 00:16:15.222 } 00:16:15.222 ] 00:16:15.222 }' 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.222 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.586 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.586 13:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.155 [2024-10-01 13:50:26.088547] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:16.155 [2024-10-01 13:50:26.088678] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:16.155 [2024-10-01 13:50:26.088749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.414 13:50:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.414 "name": "raid_bdev1", 00:16:16.414 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:16.414 "strip_size_kb": 0, 00:16:16.414 "state": "online", 00:16:16.414 "raid_level": "raid1", 00:16:16.414 "superblock": false, 00:16:16.414 "num_base_bdevs": 4, 00:16:16.414 "num_base_bdevs_discovered": 3, 00:16:16.414 "num_base_bdevs_operational": 3, 00:16:16.414 "base_bdevs_list": [ 00:16:16.414 { 00:16:16.414 "name": "spare", 00:16:16.414 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:16.414 "is_configured": true, 00:16:16.414 "data_offset": 0, 00:16:16.414 "data_size": 65536 00:16:16.414 }, 00:16:16.414 { 00:16:16.414 "name": null, 00:16:16.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.414 "is_configured": false, 00:16:16.414 "data_offset": 0, 00:16:16.414 "data_size": 65536 00:16:16.414 }, 00:16:16.414 { 00:16:16.414 "name": "BaseBdev3", 00:16:16.414 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:16.414 "is_configured": true, 00:16:16.414 "data_offset": 0, 00:16:16.414 "data_size": 65536 00:16:16.414 }, 00:16:16.414 { 00:16:16.414 "name": "BaseBdev4", 00:16:16.414 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:16.414 "is_configured": true, 00:16:16.414 "data_offset": 0, 00:16:16.414 "data_size": 65536 00:16:16.414 } 00:16:16.414 ] 00:16:16.414 }' 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:16.414 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.415 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.674 "name": "raid_bdev1", 00:16:16.674 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:16.674 "strip_size_kb": 0, 00:16:16.674 "state": "online", 00:16:16.674 "raid_level": "raid1", 00:16:16.674 "superblock": false, 00:16:16.674 "num_base_bdevs": 4, 00:16:16.674 "num_base_bdevs_discovered": 3, 00:16:16.674 "num_base_bdevs_operational": 3, 00:16:16.674 
"base_bdevs_list": [ 00:16:16.674 { 00:16:16.674 "name": "spare", 00:16:16.674 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:16.674 "is_configured": true, 00:16:16.674 "data_offset": 0, 00:16:16.674 "data_size": 65536 00:16:16.674 }, 00:16:16.674 { 00:16:16.674 "name": null, 00:16:16.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.674 "is_configured": false, 00:16:16.674 "data_offset": 0, 00:16:16.674 "data_size": 65536 00:16:16.674 }, 00:16:16.674 { 00:16:16.674 "name": "BaseBdev3", 00:16:16.674 "uuid": "7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:16.674 "is_configured": true, 00:16:16.674 "data_offset": 0, 00:16:16.674 "data_size": 65536 00:16:16.674 }, 00:16:16.674 { 00:16:16.674 "name": "BaseBdev4", 00:16:16.674 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:16.674 "is_configured": true, 00:16:16.674 "data_offset": 0, 00:16:16.674 "data_size": 65536 00:16:16.674 } 00:16:16.674 ] 00:16:16.674 }' 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.674 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.675 "name": "raid_bdev1", 00:16:16.675 "uuid": "bfaad870-e009-4b7b-9d45-3984f0550e71", 00:16:16.675 "strip_size_kb": 0, 00:16:16.675 "state": "online", 00:16:16.675 "raid_level": "raid1", 00:16:16.675 "superblock": false, 00:16:16.675 "num_base_bdevs": 4, 00:16:16.675 "num_base_bdevs_discovered": 3, 00:16:16.675 "num_base_bdevs_operational": 3, 00:16:16.675 "base_bdevs_list": [ 00:16:16.675 { 00:16:16.675 "name": "spare", 00:16:16.675 "uuid": "c681a7ed-bec3-59d4-8126-e49585558eda", 00:16:16.675 "is_configured": true, 00:16:16.675 "data_offset": 0, 00:16:16.675 "data_size": 65536 00:16:16.675 }, 00:16:16.675 { 00:16:16.675 "name": null, 00:16:16.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.675 "is_configured": false, 00:16:16.675 "data_offset": 0, 00:16:16.675 "data_size": 65536 00:16:16.675 }, 00:16:16.675 { 00:16:16.675 "name": "BaseBdev3", 00:16:16.675 "uuid": 
"7f5d5471-6db4-5cb9-af8e-ac673b147719", 00:16:16.675 "is_configured": true, 00:16:16.675 "data_offset": 0, 00:16:16.675 "data_size": 65536 00:16:16.675 }, 00:16:16.675 { 00:16:16.675 "name": "BaseBdev4", 00:16:16.675 "uuid": "4000e185-f08b-5bfa-9ab5-a261301d75f3", 00:16:16.675 "is_configured": true, 00:16:16.675 "data_offset": 0, 00:16:16.675 "data_size": 65536 00:16:16.675 } 00:16:16.675 ] 00:16:16.675 }' 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.675 13:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.935 [2024-10-01 13:50:27.070195] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.935 [2024-10-01 13:50:27.070255] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.935 [2024-10-01 13:50:27.070371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.935 [2024-10-01 13:50:27.070496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.935 [2024-10-01 13:50:27.070511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 
00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.935 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:17.194 /dev/nbd0 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:17.469 13:50:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:17.469 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.469 1+0 records in 00:16:17.469 1+0 records out 00:16:17.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394535 s, 10.4 MB/s 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.470 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:17.728 /dev/nbd1 00:16:17.728 
13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.728 1+0 records in 00:16:17.728 1+0 records out 00:16:17.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632177 s, 6.5 MB/s 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.728 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.729 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:17.729 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.729 13:50:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.988 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77537 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77537 ']' 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77537 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77537 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.246 killing process with pid 77537 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77537' 00:16:18.246 
13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77537 00:16:18.246 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.246 00:16:18.246 Latency(us) 00:16:18.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.246 =================================================================================================================== 00:16:18.246 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.246 [2024-10-01 13:50:28.434412] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.246 13:50:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77537 00:16:18.872 [2024-10-01 13:50:28.934406] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.247 13:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:20.247 00:16:20.247 real 0m18.629s 00:16:20.247 user 0m19.912s 00:16:20.247 sys 0m3.929s 00:16:20.247 13:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.248 ************************************ 00:16:20.248 END TEST raid_rebuild_test 00:16:20.248 ************************************ 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.248 13:50:30 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:20.248 13:50:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:20.248 13:50:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.248 13:50:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.248 ************************************ 00:16:20.248 START TEST raid_rebuild_test_sb 00:16:20.248 ************************************ 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:16:20.248 
13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78000 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78000 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78000 ']' 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.248 13:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:20.248 Zero copy mechanism will not be used. 00:16:20.248 [2024-10-01 13:50:30.385437] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:16:20.248 [2024-10-01 13:50:30.385568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78000 ] 00:16:20.507 [2024-10-01 13:50:30.557385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.765 [2024-10-01 13:50:30.791949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.024 [2024-10-01 13:50:31.019768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.024 [2024-10-01 13:50:31.019846] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.283 13:50:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 BaseBdev1_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 [2024-10-01 13:50:31.305769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.283 [2024-10-01 13:50:31.305851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.283 [2024-10-01 13:50:31.305880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:21.283 [2024-10-01 13:50:31.305899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.283 [2024-10-01 13:50:31.308534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.283 [2024-10-01 13:50:31.308577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.283 BaseBdev1 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 
BaseBdev2_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 [2024-10-01 13:50:31.382921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:21.283 [2024-10-01 13:50:31.383004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.283 [2024-10-01 13:50:31.383028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.283 [2024-10-01 13:50:31.383042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.283 [2024-10-01 13:50:31.385627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.283 [2024-10-01 13:50:31.385671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:21.283 BaseBdev2 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 BaseBdev3_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 [2024-10-01 13:50:31.442114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:21.283 [2024-10-01 13:50:31.442194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.283 [2024-10-01 13:50:31.442222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:21.283 [2024-10-01 13:50:31.442237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.283 [2024-10-01 13:50:31.444846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.283 [2024-10-01 13:50:31.444892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:21.283 BaseBdev3 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.283 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.541 BaseBdev4_malloc 00:16:21.541 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.541 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 [2024-10-01 13:50:31.500232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:21.542 [2024-10-01 13:50:31.500321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.542 [2024-10-01 13:50:31.500346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:21.542 [2024-10-01 13:50:31.500361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.542 [2024-10-01 13:50:31.502866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.542 [2024-10-01 13:50:31.502919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:21.542 BaseBdev4 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 spare_malloc 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 spare_delay 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 [2024-10-01 13:50:31.569492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.542 [2024-10-01 13:50:31.569598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.542 [2024-10-01 13:50:31.569626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:21.542 [2024-10-01 13:50:31.569642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.542 [2024-10-01 13:50:31.572220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.542 [2024-10-01 13:50:31.572268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.542 spare 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 [2024-10-01 13:50:31.581539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.542 [2024-10-01 13:50:31.583667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.542 [2024-10-01 13:50:31.583751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.542 [2024-10-01 13:50:31.583806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:21.542 [2024-10-01 13:50:31.584007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.542 [2024-10-01 13:50:31.584023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.542 [2024-10-01 13:50:31.584328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:21.542 [2024-10-01 13:50:31.584528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.542 [2024-10-01 13:50:31.584541] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:21.542 [2024-10-01 13:50:31.584724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.542 "name": "raid_bdev1", 00:16:21.542 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:21.542 "strip_size_kb": 0, 00:16:21.542 "state": "online", 00:16:21.542 "raid_level": "raid1", 00:16:21.542 "superblock": true, 00:16:21.542 "num_base_bdevs": 4, 00:16:21.542 "num_base_bdevs_discovered": 4, 00:16:21.542 "num_base_bdevs_operational": 4, 00:16:21.542 "base_bdevs_list": [ 00:16:21.542 { 00:16:21.542 "name": "BaseBdev1", 00:16:21.542 "uuid": "6a83dd7b-f772-51e1-8705-a1fcf027b05d", 00:16:21.542 "is_configured": true, 00:16:21.542 "data_offset": 2048, 00:16:21.542 "data_size": 63488 00:16:21.542 }, 00:16:21.542 { 00:16:21.542 "name": "BaseBdev2", 00:16:21.542 "uuid": "4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:21.542 "is_configured": true, 00:16:21.542 "data_offset": 2048, 00:16:21.542 "data_size": 63488 00:16:21.542 }, 00:16:21.542 { 00:16:21.542 "name": "BaseBdev3", 00:16:21.542 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:21.542 "is_configured": true, 00:16:21.542 "data_offset": 2048, 00:16:21.542 "data_size": 63488 00:16:21.542 }, 00:16:21.542 { 00:16:21.542 "name": "BaseBdev4", 00:16:21.542 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:21.542 "is_configured": true, 00:16:21.542 "data_offset": 2048, 00:16:21.542 "data_size": 63488 00:16:21.542 } 00:16:21.542 ] 00:16:21.542 }' 
00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.542 13:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.109 [2024-10-01 13:50:32.041128] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.109 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:22.368 [2024-10-01 13:50:32.300541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:22.368 /dev/nbd0 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # break 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.368 1+0 records in 00:16:22.368 1+0 records out 00:16:22.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465842 s, 8.8 MB/s 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:22.368 13:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:28.967 63488+0 records in 00:16:28.967 63488+0 records out 00:16:28.967 32505856 bytes (33 MB, 31 MiB) copied, 6.25641 s, 5.2 MB/s 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:28.967 13:50:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:28.967 [2024-10-01 13:50:38.830136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.967 [2024-10-01 13:50:38.874189] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.967 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.967 "name": "raid_bdev1", 00:16:28.967 "uuid": 
"5f9fe209-846a-408d-896a-1c267f30f465", 00:16:28.967 "strip_size_kb": 0, 00:16:28.967 "state": "online", 00:16:28.967 "raid_level": "raid1", 00:16:28.967 "superblock": true, 00:16:28.967 "num_base_bdevs": 4, 00:16:28.968 "num_base_bdevs_discovered": 3, 00:16:28.968 "num_base_bdevs_operational": 3, 00:16:28.968 "base_bdevs_list": [ 00:16:28.968 { 00:16:28.968 "name": null, 00:16:28.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.968 "is_configured": false, 00:16:28.968 "data_offset": 0, 00:16:28.968 "data_size": 63488 00:16:28.968 }, 00:16:28.968 { 00:16:28.968 "name": "BaseBdev2", 00:16:28.968 "uuid": "4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:28.968 "is_configured": true, 00:16:28.968 "data_offset": 2048, 00:16:28.968 "data_size": 63488 00:16:28.968 }, 00:16:28.968 { 00:16:28.968 "name": "BaseBdev3", 00:16:28.968 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:28.968 "is_configured": true, 00:16:28.968 "data_offset": 2048, 00:16:28.968 "data_size": 63488 00:16:28.968 }, 00:16:28.968 { 00:16:28.968 "name": "BaseBdev4", 00:16:28.968 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:28.968 "is_configured": true, 00:16:28.968 "data_offset": 2048, 00:16:28.968 "data_size": 63488 00:16:28.968 } 00:16:28.968 ] 00:16:28.968 }' 00:16:28.968 13:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.968 13:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.229 13:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.229 13:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.229 13:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.229 [2024-10-01 13:50:39.313587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.229 [2024-10-01 13:50:39.328807] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:29.229 13:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.229 13:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:29.229 [2024-10-01 13:50:39.330969] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.166 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.426 "name": "raid_bdev1", 00:16:30.426 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:30.426 "strip_size_kb": 0, 00:16:30.426 "state": "online", 00:16:30.426 "raid_level": "raid1", 00:16:30.426 "superblock": true, 00:16:30.426 "num_base_bdevs": 4, 00:16:30.426 "num_base_bdevs_discovered": 4, 00:16:30.426 "num_base_bdevs_operational": 4, 00:16:30.426 "process": { 00:16:30.426 "type": 
"rebuild", 00:16:30.426 "target": "spare", 00:16:30.426 "progress": { 00:16:30.426 "blocks": 20480, 00:16:30.426 "percent": 32 00:16:30.426 } 00:16:30.426 }, 00:16:30.426 "base_bdevs_list": [ 00:16:30.426 { 00:16:30.426 "name": "spare", 00:16:30.426 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev2", 00:16:30.426 "uuid": "4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev3", 00:16:30.426 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev4", 00:16:30.426 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 } 00:16:30.426 ] 00:16:30.426 }' 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 [2024-10-01 13:50:40.459629] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.426 [2024-10-01 13:50:40.537257] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:30.426 [2024-10-01 13:50:40.537350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.426 [2024-10-01 13:50:40.537370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.426 [2024-10-01 13:50:40.537383] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.426 "name": "raid_bdev1", 00:16:30.426 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:30.426 "strip_size_kb": 0, 00:16:30.426 "state": "online", 00:16:30.426 "raid_level": "raid1", 00:16:30.426 "superblock": true, 00:16:30.426 "num_base_bdevs": 4, 00:16:30.426 "num_base_bdevs_discovered": 3, 00:16:30.426 "num_base_bdevs_operational": 3, 00:16:30.426 "base_bdevs_list": [ 00:16:30.426 { 00:16:30.426 "name": null, 00:16:30.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.426 "is_configured": false, 00:16:30.426 "data_offset": 0, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev2", 00:16:30.426 "uuid": "4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev3", 00:16:30.426 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 }, 00:16:30.426 { 00:16:30.426 "name": "BaseBdev4", 00:16:30.426 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:30.426 "is_configured": true, 00:16:30.426 "data_offset": 2048, 00:16:30.426 "data_size": 63488 00:16:30.426 } 00:16:30.426 ] 00:16:30.426 }' 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.426 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.995 13:50:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.995 13:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.995 13:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.995 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.995 "name": "raid_bdev1", 00:16:30.995 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:30.995 "strip_size_kb": 0, 00:16:30.995 "state": "online", 00:16:30.995 "raid_level": "raid1", 00:16:30.995 "superblock": true, 00:16:30.995 "num_base_bdevs": 4, 00:16:30.995 "num_base_bdevs_discovered": 3, 00:16:30.995 "num_base_bdevs_operational": 3, 00:16:30.995 "base_bdevs_list": [ 00:16:30.995 { 00:16:30.995 "name": null, 00:16:30.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.995 "is_configured": false, 00:16:30.995 "data_offset": 0, 00:16:30.995 "data_size": 63488 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev2", 00:16:30.995 "uuid": "4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 2048, 00:16:30.995 "data_size": 
63488 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev3", 00:16:30.995 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 2048, 00:16:30.996 "data_size": 63488 00:16:30.996 }, 00:16:30.996 { 00:16:30.996 "name": "BaseBdev4", 00:16:30.996 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:30.996 "is_configured": true, 00:16:30.996 "data_offset": 2048, 00:16:30.996 "data_size": 63488 00:16:30.996 } 00:16:30.996 ] 00:16:30.996 }' 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.996 [2024-10-01 13:50:41.149566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.996 [2024-10-01 13:50:41.165602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.996 13:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:30.996 [2024-10-01 13:50:41.168068] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.392 "name": "raid_bdev1", 00:16:32.392 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:32.392 "strip_size_kb": 0, 00:16:32.392 "state": "online", 00:16:32.392 "raid_level": "raid1", 00:16:32.392 "superblock": true, 00:16:32.392 "num_base_bdevs": 4, 00:16:32.392 "num_base_bdevs_discovered": 4, 00:16:32.392 "num_base_bdevs_operational": 4, 00:16:32.392 "process": { 00:16:32.392 "type": "rebuild", 00:16:32.392 "target": "spare", 00:16:32.392 "progress": { 00:16:32.392 "blocks": 20480, 00:16:32.392 "percent": 32 00:16:32.392 } 00:16:32.392 }, 00:16:32.392 "base_bdevs_list": [ 00:16:32.392 { 00:16:32.392 "name": "spare", 00:16:32.392 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:32.392 "is_configured": true, 00:16:32.392 "data_offset": 2048, 00:16:32.392 "data_size": 63488 00:16:32.392 }, 00:16:32.392 { 00:16:32.392 "name": "BaseBdev2", 00:16:32.392 "uuid": 
"4e395552-c593-50fb-9b0a-d38349e487f6", 00:16:32.392 "is_configured": true, 00:16:32.392 "data_offset": 2048, 00:16:32.392 "data_size": 63488 00:16:32.392 }, 00:16:32.392 { 00:16:32.392 "name": "BaseBdev3", 00:16:32.392 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:32.392 "is_configured": true, 00:16:32.392 "data_offset": 2048, 00:16:32.392 "data_size": 63488 00:16:32.392 }, 00:16:32.392 { 00:16:32.392 "name": "BaseBdev4", 00:16:32.392 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:32.392 "is_configured": true, 00:16:32.392 "data_offset": 2048, 00:16:32.392 "data_size": 63488 00:16:32.392 } 00:16:32.392 ] 00:16:32.392 }' 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.392 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:32.393 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.393 13:50:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.393 [2024-10-01 13:50:42.327915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.393 [2024-10-01 13:50:42.474200] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.393 "name": "raid_bdev1", 00:16:32.393 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:32.393 "strip_size_kb": 0, 00:16:32.393 
"state": "online", 00:16:32.393 "raid_level": "raid1", 00:16:32.393 "superblock": true, 00:16:32.393 "num_base_bdevs": 4, 00:16:32.393 "num_base_bdevs_discovered": 3, 00:16:32.393 "num_base_bdevs_operational": 3, 00:16:32.393 "process": { 00:16:32.393 "type": "rebuild", 00:16:32.393 "target": "spare", 00:16:32.393 "progress": { 00:16:32.393 "blocks": 24576, 00:16:32.393 "percent": 38 00:16:32.393 } 00:16:32.393 }, 00:16:32.393 "base_bdevs_list": [ 00:16:32.393 { 00:16:32.393 "name": "spare", 00:16:32.393 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:32.393 "is_configured": true, 00:16:32.393 "data_offset": 2048, 00:16:32.393 "data_size": 63488 00:16:32.393 }, 00:16:32.393 { 00:16:32.393 "name": null, 00:16:32.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.393 "is_configured": false, 00:16:32.393 "data_offset": 0, 00:16:32.393 "data_size": 63488 00:16:32.393 }, 00:16:32.393 { 00:16:32.393 "name": "BaseBdev3", 00:16:32.393 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:32.393 "is_configured": true, 00:16:32.393 "data_offset": 2048, 00:16:32.393 "data_size": 63488 00:16:32.393 }, 00:16:32.393 { 00:16:32.393 "name": "BaseBdev4", 00:16:32.393 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:32.393 "is_configured": true, 00:16:32.393 "data_offset": 2048, 00:16:32.393 "data_size": 63488 00:16:32.393 } 00:16:32.393 ] 00:16:32.393 }' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.393 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.652 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.652 "name": "raid_bdev1", 00:16:32.652 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:32.652 "strip_size_kb": 0, 00:16:32.652 "state": "online", 00:16:32.652 "raid_level": "raid1", 00:16:32.652 "superblock": true, 00:16:32.652 "num_base_bdevs": 4, 00:16:32.652 "num_base_bdevs_discovered": 3, 00:16:32.652 "num_base_bdevs_operational": 3, 00:16:32.652 "process": { 00:16:32.652 "type": "rebuild", 00:16:32.652 "target": "spare", 00:16:32.652 "progress": { 00:16:32.652 "blocks": 26624, 00:16:32.652 "percent": 41 00:16:32.652 } 00:16:32.652 }, 00:16:32.652 "base_bdevs_list": [ 00:16:32.652 { 00:16:32.652 "name": "spare", 00:16:32.652 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:32.652 "is_configured": 
true, 00:16:32.652 "data_offset": 2048, 00:16:32.652 "data_size": 63488 00:16:32.653 }, 00:16:32.653 { 00:16:32.653 "name": null, 00:16:32.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.653 "is_configured": false, 00:16:32.653 "data_offset": 0, 00:16:32.653 "data_size": 63488 00:16:32.653 }, 00:16:32.653 { 00:16:32.653 "name": "BaseBdev3", 00:16:32.653 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:32.653 "is_configured": true, 00:16:32.653 "data_offset": 2048, 00:16:32.653 "data_size": 63488 00:16:32.653 }, 00:16:32.653 { 00:16:32.653 "name": "BaseBdev4", 00:16:32.653 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:32.653 "is_configured": true, 00:16:32.653 "data_offset": 2048, 00:16:32.653 "data_size": 63488 00:16:32.653 } 00:16:32.653 ] 00:16:32.653 }' 00:16:32.653 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.653 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.653 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.653 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.653 13:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.587 13:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.846 13:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.846 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.846 "name": "raid_bdev1", 00:16:33.846 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:33.846 "strip_size_kb": 0, 00:16:33.846 "state": "online", 00:16:33.846 "raid_level": "raid1", 00:16:33.846 "superblock": true, 00:16:33.846 "num_base_bdevs": 4, 00:16:33.846 "num_base_bdevs_discovered": 3, 00:16:33.846 "num_base_bdevs_operational": 3, 00:16:33.846 "process": { 00:16:33.846 "type": "rebuild", 00:16:33.846 "target": "spare", 00:16:33.846 "progress": { 00:16:33.846 "blocks": 49152, 00:16:33.846 "percent": 77 00:16:33.846 } 00:16:33.846 }, 00:16:33.846 "base_bdevs_list": [ 00:16:33.846 { 00:16:33.846 "name": "spare", 00:16:33.846 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:33.846 "is_configured": true, 00:16:33.846 "data_offset": 2048, 00:16:33.846 "data_size": 63488 00:16:33.846 }, 00:16:33.846 { 00:16:33.846 "name": null, 00:16:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.846 "is_configured": false, 00:16:33.846 "data_offset": 0, 00:16:33.846 "data_size": 63488 00:16:33.846 }, 00:16:33.846 { 00:16:33.846 "name": "BaseBdev3", 00:16:33.847 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:33.847 "is_configured": true, 00:16:33.847 "data_offset": 2048, 00:16:33.847 "data_size": 63488 00:16:33.847 }, 00:16:33.847 { 00:16:33.847 "name": "BaseBdev4", 00:16:33.847 "uuid": 
"f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:33.847 "is_configured": true, 00:16:33.847 "data_offset": 2048, 00:16:33.847 "data_size": 63488 00:16:33.847 } 00:16:33.847 ] 00:16:33.847 }' 00:16:33.847 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.847 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.847 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.847 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.847 13:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.415 [2024-10-01 13:50:44.384088] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:34.415 [2024-10-01 13:50:44.384192] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:34.415 [2024-10-01 13:50:44.384337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.983 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.983 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.983 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.984 "name": "raid_bdev1", 00:16:34.984 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:34.984 "strip_size_kb": 0, 00:16:34.984 "state": "online", 00:16:34.984 "raid_level": "raid1", 00:16:34.984 "superblock": true, 00:16:34.984 "num_base_bdevs": 4, 00:16:34.984 "num_base_bdevs_discovered": 3, 00:16:34.984 "num_base_bdevs_operational": 3, 00:16:34.984 "base_bdevs_list": [ 00:16:34.984 { 00:16:34.984 "name": "spare", 00:16:34.984 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": null, 00:16:34.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.984 "is_configured": false, 00:16:34.984 "data_offset": 0, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": "BaseBdev3", 00:16:34.984 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": "BaseBdev4", 00:16:34.984 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 } 00:16:34.984 ] 00:16:34.984 }' 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.984 13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:34.984 
13:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.984 "name": "raid_bdev1", 00:16:34.984 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:34.984 "strip_size_kb": 0, 00:16:34.984 "state": "online", 00:16:34.984 "raid_level": "raid1", 00:16:34.984 "superblock": true, 00:16:34.984 "num_base_bdevs": 4, 00:16:34.984 "num_base_bdevs_discovered": 3, 00:16:34.984 "num_base_bdevs_operational": 3, 00:16:34.984 "base_bdevs_list": [ 00:16:34.984 { 00:16:34.984 "name": "spare", 00:16:34.984 "uuid": 
"abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": null, 00:16:34.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.984 "is_configured": false, 00:16:34.984 "data_offset": 0, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": "BaseBdev3", 00:16:34.984 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 }, 00:16:34.984 { 00:16:34.984 "name": "BaseBdev4", 00:16:34.984 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:34.984 "is_configured": true, 00:16:34.984 "data_offset": 2048, 00:16:34.984 "data_size": 63488 00:16:34.984 } 00:16:34.984 ] 00:16:34.984 }' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.984 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.244 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.244 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.244 "name": "raid_bdev1", 00:16:35.244 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:35.244 "strip_size_kb": 0, 00:16:35.244 "state": "online", 00:16:35.244 "raid_level": "raid1", 00:16:35.244 "superblock": true, 00:16:35.244 "num_base_bdevs": 4, 00:16:35.244 "num_base_bdevs_discovered": 3, 00:16:35.244 "num_base_bdevs_operational": 3, 00:16:35.244 "base_bdevs_list": [ 00:16:35.244 { 00:16:35.244 "name": "spare", 00:16:35.244 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:35.244 "is_configured": true, 00:16:35.244 "data_offset": 2048, 00:16:35.244 "data_size": 63488 00:16:35.244 }, 00:16:35.244 { 00:16:35.244 "name": null, 00:16:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.244 "is_configured": false, 00:16:35.244 "data_offset": 0, 00:16:35.244 "data_size": 63488 00:16:35.244 }, 00:16:35.244 { 00:16:35.244 "name": "BaseBdev3", 00:16:35.244 "uuid": 
"b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:35.244 "is_configured": true, 00:16:35.244 "data_offset": 2048, 00:16:35.244 "data_size": 63488 00:16:35.244 }, 00:16:35.244 { 00:16:35.244 "name": "BaseBdev4", 00:16:35.244 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:35.244 "is_configured": true, 00:16:35.244 "data_offset": 2048, 00:16:35.244 "data_size": 63488 00:16:35.244 } 00:16:35.244 ] 00:16:35.244 }' 00:16:35.244 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.244 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.503 [2024-10-01 13:50:45.544616] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.503 [2024-10-01 13:50:45.544663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.503 [2024-10-01 13:50:45.544756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.503 [2024-10-01 13:50:45.544846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.503 [2024-10-01 13:50:45.544859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.503 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.504 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:35.763 /dev/nbd0 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:35.763 13:50:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.763 1+0 records in 00:16:35.763 1+0 records out 00:16:35.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380824 s, 10.8 MB/s 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.763 13:50:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:36.022 /dev/nbd1 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.022 1+0 records in 00:16:36.022 1+0 records out 00:16:36.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433119 s, 9.5 MB/s 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.022 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.282 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:36.540 
13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.540 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.798 [2024-10-01 13:50:46.857272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:36.798 [2024-10-01 13:50:46.857343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.798 [2024-10-01 13:50:46.857371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:36.798 [2024-10-01 13:50:46.857383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.798 [2024-10-01 13:50:46.859921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.798 [2024-10-01 13:50:46.859995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:36.798 [2024-10-01 13:50:46.860124] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:36.798 [2024-10-01 13:50:46.860191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.798 [2024-10-01 13:50:46.860332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.798 [2024-10-01 13:50:46.860452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:36.798 spare 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.798 [2024-10-01 13:50:46.960401] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:36.798 [2024-10-01 13:50:46.960469] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:36.798 [2024-10-01 
13:50:46.960872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:36.798 [2024-10-01 13:50:46.961083] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:36.798 [2024-10-01 13:50:46.961105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:36.798 [2024-10-01 13:50:46.961313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.798 13:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.058 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.058 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.058 "name": "raid_bdev1", 00:16:37.058 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:37.058 "strip_size_kb": 0, 00:16:37.058 "state": "online", 00:16:37.058 "raid_level": "raid1", 00:16:37.058 "superblock": true, 00:16:37.058 "num_base_bdevs": 4, 00:16:37.058 "num_base_bdevs_discovered": 3, 00:16:37.058 "num_base_bdevs_operational": 3, 00:16:37.058 "base_bdevs_list": [ 00:16:37.058 { 00:16:37.058 "name": "spare", 00:16:37.058 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:37.058 "is_configured": true, 00:16:37.058 "data_offset": 2048, 00:16:37.058 "data_size": 63488 00:16:37.058 }, 00:16:37.058 { 00:16:37.058 "name": null, 00:16:37.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.058 "is_configured": false, 00:16:37.058 "data_offset": 2048, 00:16:37.058 "data_size": 63488 00:16:37.058 }, 00:16:37.058 { 00:16:37.058 "name": "BaseBdev3", 00:16:37.058 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:37.058 "is_configured": true, 00:16:37.058 "data_offset": 2048, 00:16:37.058 "data_size": 63488 00:16:37.058 }, 00:16:37.058 { 00:16:37.058 "name": "BaseBdev4", 00:16:37.058 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:37.058 "is_configured": true, 00:16:37.058 "data_offset": 2048, 00:16:37.058 "data_size": 63488 00:16:37.058 } 00:16:37.058 ] 00:16:37.058 }' 00:16:37.058 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.058 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.317 "name": "raid_bdev1", 00:16:37.317 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:37.317 "strip_size_kb": 0, 00:16:37.317 "state": "online", 00:16:37.317 "raid_level": "raid1", 00:16:37.317 "superblock": true, 00:16:37.317 "num_base_bdevs": 4, 00:16:37.317 "num_base_bdevs_discovered": 3, 00:16:37.317 "num_base_bdevs_operational": 3, 00:16:37.317 "base_bdevs_list": [ 00:16:37.317 { 00:16:37.317 "name": "spare", 00:16:37.317 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:37.317 "is_configured": true, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 }, 00:16:37.317 { 00:16:37.317 "name": null, 00:16:37.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.317 "is_configured": false, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 }, 00:16:37.317 { 00:16:37.317 "name": "BaseBdev3", 00:16:37.317 
"uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:37.317 "is_configured": true, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 }, 00:16:37.317 { 00:16:37.317 "name": "BaseBdev4", 00:16:37.317 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:37.317 "is_configured": true, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 } 00:16:37.317 ] 00:16:37.317 }' 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.317 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.576 [2024-10-01 13:50:47.596466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.576 13:50:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.576 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.577 "name": "raid_bdev1", 00:16:37.577 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:37.577 "strip_size_kb": 0, 00:16:37.577 "state": "online", 
00:16:37.577 "raid_level": "raid1", 00:16:37.577 "superblock": true, 00:16:37.577 "num_base_bdevs": 4, 00:16:37.577 "num_base_bdevs_discovered": 2, 00:16:37.577 "num_base_bdevs_operational": 2, 00:16:37.577 "base_bdevs_list": [ 00:16:37.577 { 00:16:37.577 "name": null, 00:16:37.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.577 "is_configured": false, 00:16:37.577 "data_offset": 0, 00:16:37.577 "data_size": 63488 00:16:37.577 }, 00:16:37.577 { 00:16:37.577 "name": null, 00:16:37.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.577 "is_configured": false, 00:16:37.577 "data_offset": 2048, 00:16:37.577 "data_size": 63488 00:16:37.577 }, 00:16:37.577 { 00:16:37.577 "name": "BaseBdev3", 00:16:37.577 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:37.577 "is_configured": true, 00:16:37.577 "data_offset": 2048, 00:16:37.577 "data_size": 63488 00:16:37.577 }, 00:16:37.577 { 00:16:37.577 "name": "BaseBdev4", 00:16:37.577 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:37.577 "is_configured": true, 00:16:37.577 "data_offset": 2048, 00:16:37.577 "data_size": 63488 00:16:37.577 } 00:16:37.577 ] 00:16:37.577 }' 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.577 13:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.144 13:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:38.144 13:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.144 13:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.144 [2024-10-01 13:50:48.043821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.144 [2024-10-01 13:50:48.044036] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:16:38.144 [2024-10-01 13:50:48.044067] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:38.144 [2024-10-01 13:50:48.044110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.144 [2024-10-01 13:50:48.059065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:38.144 13:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.144 13:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:38.144 [2024-10-01 13:50:48.061402] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.080 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.080 "name": "raid_bdev1", 00:16:39.080 "uuid": 
"5f9fe209-846a-408d-896a-1c267f30f465", 00:16:39.080 "strip_size_kb": 0, 00:16:39.080 "state": "online", 00:16:39.080 "raid_level": "raid1", 00:16:39.080 "superblock": true, 00:16:39.080 "num_base_bdevs": 4, 00:16:39.080 "num_base_bdevs_discovered": 3, 00:16:39.080 "num_base_bdevs_operational": 3, 00:16:39.080 "process": { 00:16:39.080 "type": "rebuild", 00:16:39.080 "target": "spare", 00:16:39.080 "progress": { 00:16:39.080 "blocks": 20480, 00:16:39.080 "percent": 32 00:16:39.080 } 00:16:39.080 }, 00:16:39.080 "base_bdevs_list": [ 00:16:39.080 { 00:16:39.080 "name": "spare", 00:16:39.080 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:39.080 "is_configured": true, 00:16:39.080 "data_offset": 2048, 00:16:39.080 "data_size": 63488 00:16:39.080 }, 00:16:39.080 { 00:16:39.080 "name": null, 00:16:39.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.080 "is_configured": false, 00:16:39.080 "data_offset": 2048, 00:16:39.080 "data_size": 63488 00:16:39.080 }, 00:16:39.080 { 00:16:39.080 "name": "BaseBdev3", 00:16:39.081 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:39.081 "is_configured": true, 00:16:39.081 "data_offset": 2048, 00:16:39.081 "data_size": 63488 00:16:39.081 }, 00:16:39.081 { 00:16:39.081 "name": "BaseBdev4", 00:16:39.081 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:39.081 "is_configured": true, 00:16:39.081 "data_offset": 2048, 00:16:39.081 "data_size": 63488 00:16:39.081 } 00:16:39.081 ] 00:16:39.081 }' 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.081 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.081 [2024-10-01 13:50:49.213360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.081 [2024-10-01 13:50:49.267638] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.081 [2024-10-01 13:50:49.267726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.081 [2024-10-01 13:50:49.267749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.081 [2024-10-01 13:50:49.267759] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.340 "name": "raid_bdev1", 00:16:39.340 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:39.340 "strip_size_kb": 0, 00:16:39.340 "state": "online", 00:16:39.340 "raid_level": "raid1", 00:16:39.340 "superblock": true, 00:16:39.340 "num_base_bdevs": 4, 00:16:39.340 "num_base_bdevs_discovered": 2, 00:16:39.340 "num_base_bdevs_operational": 2, 00:16:39.340 "base_bdevs_list": [ 00:16:39.340 { 00:16:39.340 "name": null, 00:16:39.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.340 "is_configured": false, 00:16:39.340 "data_offset": 0, 00:16:39.340 "data_size": 63488 00:16:39.340 }, 00:16:39.340 { 00:16:39.340 "name": null, 00:16:39.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.340 "is_configured": false, 00:16:39.340 "data_offset": 2048, 00:16:39.340 "data_size": 63488 00:16:39.340 }, 00:16:39.340 { 00:16:39.340 "name": "BaseBdev3", 00:16:39.340 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:39.340 "is_configured": true, 00:16:39.340 "data_offset": 2048, 00:16:39.340 "data_size": 63488 00:16:39.340 }, 00:16:39.340 { 00:16:39.340 "name": "BaseBdev4", 00:16:39.340 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:39.340 "is_configured": true, 00:16:39.340 
"data_offset": 2048, 00:16:39.340 "data_size": 63488 00:16:39.340 } 00:16:39.340 ] 00:16:39.340 }' 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.340 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.599 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.600 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 [2024-10-01 13:50:49.707892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.600 [2024-10-01 13:50:49.707966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.600 [2024-10-01 13:50:49.707998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:39.600 [2024-10-01 13:50:49.708011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.600 [2024-10-01 13:50:49.708554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.600 [2024-10-01 13:50:49.708587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.600 [2024-10-01 13:50:49.708690] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:39.600 [2024-10-01 13:50:49.708705] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:39.600 [2024-10-01 13:50:49.708722] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:39.600 [2024-10-01 13:50:49.708754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.600 [2024-10-01 13:50:49.723630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:39.600 spare 00:16:39.600 13:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.600 13:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:39.600 [2024-10-01 13:50:49.725806] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.978 "name": "raid_bdev1", 00:16:40.978 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:40.978 "strip_size_kb": 0, 00:16:40.978 "state": "online", 00:16:40.978 
"raid_level": "raid1", 00:16:40.978 "superblock": true, 00:16:40.978 "num_base_bdevs": 4, 00:16:40.978 "num_base_bdevs_discovered": 3, 00:16:40.978 "num_base_bdevs_operational": 3, 00:16:40.978 "process": { 00:16:40.978 "type": "rebuild", 00:16:40.978 "target": "spare", 00:16:40.978 "progress": { 00:16:40.978 "blocks": 20480, 00:16:40.978 "percent": 32 00:16:40.978 } 00:16:40.978 }, 00:16:40.978 "base_bdevs_list": [ 00:16:40.978 { 00:16:40.978 "name": "spare", 00:16:40.978 "uuid": "abe0c659-70d7-5817-b9f2-64b2320cccea", 00:16:40.978 "is_configured": true, 00:16:40.978 "data_offset": 2048, 00:16:40.978 "data_size": 63488 00:16:40.978 }, 00:16:40.978 { 00:16:40.978 "name": null, 00:16:40.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.978 "is_configured": false, 00:16:40.978 "data_offset": 2048, 00:16:40.978 "data_size": 63488 00:16:40.978 }, 00:16:40.978 { 00:16:40.978 "name": "BaseBdev3", 00:16:40.978 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:40.978 "is_configured": true, 00:16:40.978 "data_offset": 2048, 00:16:40.978 "data_size": 63488 00:16:40.978 }, 00:16:40.978 { 00:16:40.978 "name": "BaseBdev4", 00:16:40.978 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:40.978 "is_configured": true, 00:16:40.978 "data_offset": 2048, 00:16:40.978 "data_size": 63488 00:16:40.978 } 00:16:40.978 ] 00:16:40.978 }' 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.978 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.979 [2024-10-01 13:50:50.870637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.979 [2024-10-01 13:50:50.931657] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.979 [2024-10-01 13:50:50.931745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.979 [2024-10-01 13:50:50.931764] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.979 [2024-10-01 13:50:50.931776] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.979 
13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.979 "name": "raid_bdev1", 00:16:40.979 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:40.979 "strip_size_kb": 0, 00:16:40.979 "state": "online", 00:16:40.979 "raid_level": "raid1", 00:16:40.979 "superblock": true, 00:16:40.979 "num_base_bdevs": 4, 00:16:40.979 "num_base_bdevs_discovered": 2, 00:16:40.979 "num_base_bdevs_operational": 2, 00:16:40.979 "base_bdevs_list": [ 00:16:40.979 { 00:16:40.979 "name": null, 00:16:40.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.979 "is_configured": false, 00:16:40.979 "data_offset": 0, 00:16:40.979 "data_size": 63488 00:16:40.979 }, 00:16:40.979 { 00:16:40.979 "name": null, 00:16:40.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.979 "is_configured": false, 00:16:40.979 "data_offset": 2048, 00:16:40.979 "data_size": 63488 00:16:40.979 }, 00:16:40.979 { 00:16:40.979 "name": "BaseBdev3", 00:16:40.979 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:40.979 "is_configured": true, 00:16:40.979 "data_offset": 2048, 00:16:40.979 "data_size": 63488 00:16:40.979 }, 00:16:40.979 { 00:16:40.979 "name": "BaseBdev4", 00:16:40.979 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:40.979 "is_configured": true, 00:16:40.979 "data_offset": 2048, 00:16:40.979 "data_size": 63488 00:16:40.979 } 00:16:40.979 ] 00:16:40.979 }' 00:16:40.979 13:50:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.979 13:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.238 "name": "raid_bdev1", 00:16:41.238 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:41.238 "strip_size_kb": 0, 00:16:41.238 "state": "online", 00:16:41.238 "raid_level": "raid1", 00:16:41.238 "superblock": true, 00:16:41.238 "num_base_bdevs": 4, 00:16:41.238 "num_base_bdevs_discovered": 2, 00:16:41.238 "num_base_bdevs_operational": 2, 00:16:41.238 "base_bdevs_list": [ 00:16:41.238 { 00:16:41.238 "name": null, 00:16:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.238 "is_configured": false, 00:16:41.238 "data_offset": 0, 00:16:41.238 "data_size": 63488 00:16:41.238 }, 00:16:41.238 
{ 00:16:41.238 "name": null, 00:16:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.238 "is_configured": false, 00:16:41.238 "data_offset": 2048, 00:16:41.238 "data_size": 63488 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "name": "BaseBdev3", 00:16:41.238 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:41.238 "is_configured": true, 00:16:41.238 "data_offset": 2048, 00:16:41.238 "data_size": 63488 00:16:41.238 }, 00:16:41.238 { 00:16:41.238 "name": "BaseBdev4", 00:16:41.238 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:41.238 "is_configured": true, 00:16:41.238 "data_offset": 2048, 00:16:41.238 "data_size": 63488 00:16:41.238 } 00:16:41.238 ] 00:16:41.238 }' 00:16:41.238 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.497 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.498 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:41.498 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.498 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.498 [2024-10-01 13:50:51.515611] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:41.498 [2024-10-01 13:50:51.515680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.498 [2024-10-01 13:50:51.515703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:41.498 [2024-10-01 13:50:51.515718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.498 [2024-10-01 13:50:51.516196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.498 [2024-10-01 13:50:51.516230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:41.498 [2024-10-01 13:50:51.516349] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:41.498 [2024-10-01 13:50:51.516371] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:41.498 [2024-10-01 13:50:51.516382] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.498 [2024-10-01 13:50:51.516399] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:41.498 BaseBdev1 00:16:41.498 13:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.498 13:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.436 13:50:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.436 "name": "raid_bdev1", 00:16:42.436 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:42.436 "strip_size_kb": 0, 00:16:42.436 "state": "online", 00:16:42.436 "raid_level": "raid1", 00:16:42.436 "superblock": true, 00:16:42.436 "num_base_bdevs": 4, 00:16:42.436 "num_base_bdevs_discovered": 2, 00:16:42.436 "num_base_bdevs_operational": 2, 00:16:42.436 "base_bdevs_list": [ 00:16:42.436 { 00:16:42.436 "name": null, 00:16:42.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.436 "is_configured": false, 00:16:42.436 "data_offset": 0, 00:16:42.436 "data_size": 63488 00:16:42.436 }, 00:16:42.436 { 00:16:42.436 "name": null, 00:16:42.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.436 
"is_configured": false, 00:16:42.436 "data_offset": 2048, 00:16:42.436 "data_size": 63488 00:16:42.436 }, 00:16:42.436 { 00:16:42.436 "name": "BaseBdev3", 00:16:42.436 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:42.436 "is_configured": true, 00:16:42.436 "data_offset": 2048, 00:16:42.436 "data_size": 63488 00:16:42.436 }, 00:16:42.436 { 00:16:42.436 "name": "BaseBdev4", 00:16:42.436 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:42.436 "is_configured": true, 00:16:42.436 "data_offset": 2048, 00:16:42.436 "data_size": 63488 00:16:42.436 } 00:16:42.436 ] 00:16:42.436 }' 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.436 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:43.004 "name": "raid_bdev1", 00:16:43.004 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:43.004 "strip_size_kb": 0, 00:16:43.004 "state": "online", 00:16:43.004 "raid_level": "raid1", 00:16:43.004 "superblock": true, 00:16:43.004 "num_base_bdevs": 4, 00:16:43.004 "num_base_bdevs_discovered": 2, 00:16:43.004 "num_base_bdevs_operational": 2, 00:16:43.004 "base_bdevs_list": [ 00:16:43.004 { 00:16:43.004 "name": null, 00:16:43.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.004 "is_configured": false, 00:16:43.004 "data_offset": 0, 00:16:43.004 "data_size": 63488 00:16:43.004 }, 00:16:43.004 { 00:16:43.004 "name": null, 00:16:43.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.004 "is_configured": false, 00:16:43.004 "data_offset": 2048, 00:16:43.004 "data_size": 63488 00:16:43.004 }, 00:16:43.004 { 00:16:43.004 "name": "BaseBdev3", 00:16:43.004 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:43.004 "is_configured": true, 00:16:43.004 "data_offset": 2048, 00:16:43.004 "data_size": 63488 00:16:43.004 }, 00:16:43.004 { 00:16:43.004 "name": "BaseBdev4", 00:16:43.004 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:43.004 "is_configured": true, 00:16:43.004 "data_offset": 2048, 00:16:43.004 "data_size": 63488 00:16:43.004 } 00:16:43.004 ] 00:16:43.004 }' 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.004 13:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.004 13:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.004 13:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:43.004 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:16:43.004 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:43.004 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.005 [2024-10-01 13:50:53.043645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.005 [2024-10-01 13:50:53.043848] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:43.005 [2024-10-01 13:50:53.043868] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.005 request: 00:16:43.005 { 00:16:43.005 "base_bdev": "BaseBdev1", 00:16:43.005 "raid_bdev": "raid_bdev1", 00:16:43.005 "method": "bdev_raid_add_base_bdev", 00:16:43.005 "req_id": 1 00:16:43.005 } 00:16:43.005 Got JSON-RPC error response 00:16:43.005 response: 00:16:43.005 { 00:16:43.005 "code": -22, 00:16:43.005 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:43.005 } 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:43.005 13:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.943 "name": "raid_bdev1", 00:16:43.943 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:43.943 "strip_size_kb": 0, 00:16:43.943 "state": "online", 00:16:43.943 "raid_level": "raid1", 00:16:43.943 "superblock": true, 00:16:43.943 "num_base_bdevs": 4, 00:16:43.943 "num_base_bdevs_discovered": 2, 00:16:43.943 "num_base_bdevs_operational": 2, 00:16:43.943 "base_bdevs_list": [ 00:16:43.943 { 00:16:43.943 "name": null, 00:16:43.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.943 "is_configured": false, 00:16:43.943 "data_offset": 0, 00:16:43.943 "data_size": 63488 00:16:43.943 }, 00:16:43.943 { 00:16:43.943 "name": null, 00:16:43.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.943 "is_configured": false, 00:16:43.943 "data_offset": 2048, 00:16:43.943 "data_size": 63488 00:16:43.943 }, 00:16:43.943 { 00:16:43.943 "name": "BaseBdev3", 00:16:43.943 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:43.943 "is_configured": true, 00:16:43.943 "data_offset": 2048, 00:16:43.943 "data_size": 63488 00:16:43.943 }, 00:16:43.943 { 00:16:43.943 "name": "BaseBdev4", 00:16:43.943 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:43.943 "is_configured": true, 00:16:43.943 "data_offset": 2048, 00:16:43.943 "data_size": 63488 00:16:43.943 } 00:16:43.943 ] 00:16:43.943 }' 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.943 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.521 13:50:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.521 "name": "raid_bdev1", 00:16:44.521 "uuid": "5f9fe209-846a-408d-896a-1c267f30f465", 00:16:44.521 "strip_size_kb": 0, 00:16:44.521 "state": "online", 00:16:44.521 "raid_level": "raid1", 00:16:44.521 "superblock": true, 00:16:44.521 "num_base_bdevs": 4, 00:16:44.521 "num_base_bdevs_discovered": 2, 00:16:44.521 "num_base_bdevs_operational": 2, 00:16:44.521 "base_bdevs_list": [ 00:16:44.521 { 00:16:44.521 "name": null, 00:16:44.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.521 "is_configured": false, 00:16:44.521 "data_offset": 0, 00:16:44.521 "data_size": 63488 00:16:44.521 }, 00:16:44.521 { 00:16:44.521 "name": null, 00:16:44.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.521 "is_configured": false, 00:16:44.521 "data_offset": 2048, 00:16:44.521 "data_size": 63488 00:16:44.521 }, 00:16:44.521 { 00:16:44.521 "name": "BaseBdev3", 00:16:44.521 "uuid": "b3ec95a9-7592-5dd4-a767-cea00ad0deac", 00:16:44.521 "is_configured": true, 00:16:44.521 "data_offset": 2048, 00:16:44.521 "data_size": 63488 00:16:44.521 }, 
00:16:44.521 { 00:16:44.521 "name": "BaseBdev4", 00:16:44.521 "uuid": "f05b4ba9-a14d-5872-8f18-2821495cde9d", 00:16:44.521 "is_configured": true, 00:16:44.521 "data_offset": 2048, 00:16:44.521 "data_size": 63488 00:16:44.521 } 00:16:44.521 ] 00:16:44.521 }' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78000 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78000 ']' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78000 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78000 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.521 killing process with pid 78000 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78000' 00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78000 00:16:44.521 Received shutdown signal, test time was about 60.000000 seconds 00:16:44.521 00:16:44.521 Latency(us) 00:16:44.521 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:44.521 ===================================================================================================================
00:16:44.521 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:16:44.521 [2024-10-01 13:50:54.664242] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:44.521 13:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78000
00:16:44.521 [2024-10-01 13:50:54.664376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:44.521 [2024-10-01 13:50:54.664462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:44.521 [2024-10-01 13:50:54.664475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:16:45.090 [2024-10-01 13:50:55.175539] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:46.523 13:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:16:46.523
00:16:46.523 real 0m26.223s
00:16:46.523 user 0m30.584s
00:16:46.523 sys 0m4.636s
00:16:46.523 13:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:46.523 13:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:46.523 ************************************
00:16:46.523 END TEST raid_rebuild_test_sb
00:16:46.523 ************************************
00:16:46.523 13:50:56 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true
00:16:46.523 13:50:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:16:46.523 13:50:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:46.523 13:50:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:46.523 ************************************
00:16:46.523 START TEST raid_rebuild_test_io
00:16:46.523 ************************************
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78759
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78759
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78759 ']'
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:46.524 13:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:46.524 [2024-10-01 13:50:56.698598] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:16:46.524 [2024-10-01 13:50:56.698728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78759 ]
00:16:46.524 I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:46.524 Zero copy mechanism will not be used.
00:16:46.783 [2024-10-01 13:50:56.876584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:47.042 [2024-10-01 13:50:57.099367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:47.301 [2024-10-01 13:50:57.307415] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:47.301 [2024-10-01 13:50:57.307465] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 BaseBdev1_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 [2024-10-01 13:50:57.608545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:47.559 [2024-10-01 13:50:57.608628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.559 [2024-10-01 13:50:57.608652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:47.559 [2024-10-01 13:50:57.608671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.559 [2024-10-01 13:50:57.611221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.559 [2024-10-01 13:50:57.611266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 BaseBdev1
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 BaseBdev2_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 [2024-10-01 13:50:57.678565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:16:47.559 [2024-10-01 13:50:57.678633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.559 [2024-10-01 13:50:57.678655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:47.559 [2024-10-01 13:50:57.678672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.559 [2024-10-01 13:50:57.681256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.559 [2024-10-01 13:50:57.681314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 BaseBdev2
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 BaseBdev3_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.559 [2024-10-01 13:50:57.736039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:16:47.559 [2024-10-01 13:50:57.736101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.559 [2024-10-01 13:50:57.736123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:47.559 [2024-10-01 13:50:57.736138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.559 [2024-10-01 13:50:57.738658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.559 [2024-10-01 13:50:57.738703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 BaseBdev3
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.559 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 BaseBdev4_malloc
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 [2024-10-01 13:50:57.794171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:16:47.818 [2024-10-01 13:50:57.794246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.818 [2024-10-01 13:50:57.794273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:47.818 [2024-10-01 13:50:57.794289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.818 [2024-10-01 13:50:57.796890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.818 [2024-10-01 13:50:57.796940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 BaseBdev4
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 spare_malloc
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 spare_delay
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 [2024-10-01 13:50:57.863317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:47.818 [2024-10-01 13:50:57.863387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.818 [2024-10-01 13:50:57.863424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:47.818 [2024-10-01 13:50:57.863439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.818 [2024-10-01 13:50:57.865943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.818 [2024-10-01 13:50:57.865986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare spare
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 [2024-10-01 13:50:57.875348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:47.818 [2024-10-01 13:50:57.877540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:47.818 [2024-10-01 13:50:57.877613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:47.818 [2024-10-01 13:50:57.877667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:47.818 [2024-10-01 13:50:57.877750] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:47.818 [2024-10-01 13:50:57.877763] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:16:47.818 [2024-10-01 13:50:57.878055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:47.818 [2024-10-01 13:50:57.878240] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:47.818 [2024-10-01 13:50:57.878259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:47.818 [2024-10-01 13:50:57.878442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:47.818 "name": "raid_bdev1",
00:16:47.818 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42",
00:16:47.818 "strip_size_kb": 0,
00:16:47.818 "state": "online",
00:16:47.818 "raid_level": "raid1",
00:16:47.818 "superblock": false,
00:16:47.818 "num_base_bdevs": 4,
00:16:47.818 "num_base_bdevs_discovered": 4,
00:16:47.818 "num_base_bdevs_operational": 4,
00:16:47.818 "base_bdevs_list": [
00:16:47.818 {
00:16:47.818 "name": "BaseBdev1",
00:16:47.818 "uuid": "1b6e40c3-244b-51f8-8e8a-4337cced8505",
00:16:47.818 "is_configured": true,
00:16:47.818 "data_offset": 0,
00:16:47.818 "data_size": 65536
00:16:47.818 },
00:16:47.818 {
00:16:47.818 "name": "BaseBdev2",
00:16:47.818 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944",
00:16:47.818 "is_configured": true,
00:16:47.818 "data_offset": 0,
00:16:47.818 "data_size": 65536
00:16:47.818 },
00:16:47.818 {
00:16:47.818 "name": "BaseBdev3",
00:16:47.818 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139",
00:16:47.818 "is_configured": true,
00:16:47.818 "data_offset": 0,
00:16:47.818 "data_size": 65536
00:16:47.818 },
00:16:47.818 {
00:16:47.818 "name": "BaseBdev4",
00:16:47.818 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86",
00:16:47.818 "is_configured": true,
00:16:47.818 "data_offset": 0,
00:16:47.818 "data_size": 65536
00:16:47.818 }
00:16:47.818 ]
00:16:47.818 }'
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:47.818 13:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 [2024-10-01 13:50:58.295094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 [2024-10-01 13:50:58.386619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:48.386 "name": "raid_bdev1",
00:16:48.386 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42",
00:16:48.386 "strip_size_kb": 0,
00:16:48.386 "state": "online",
00:16:48.386 "raid_level": "raid1",
00:16:48.386 "superblock": false,
00:16:48.386 "num_base_bdevs": 4,
00:16:48.386 "num_base_bdevs_discovered": 3,
00:16:48.386 "num_base_bdevs_operational": 3,
00:16:48.386 "base_bdevs_list": [
00:16:48.386 {
00:16:48.386 "name": null,
00:16:48.386 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.386 "is_configured": false,
00:16:48.386 "data_offset": 0,
00:16:48.386 "data_size": 65536
00:16:48.386 },
00:16:48.386 {
00:16:48.386 "name": "BaseBdev2",
00:16:48.386 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944",
00:16:48.386 "is_configured": true,
00:16:48.386 "data_offset": 0,
00:16:48.386 "data_size": 65536
00:16:48.386 },
00:16:48.386 {
00:16:48.386 "name": "BaseBdev3",
00:16:48.386 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139",
00:16:48.386 "is_configured": true,
00:16:48.386 "data_offset": 0,
00:16:48.386 "data_size": 65536
00:16:48.386 },
00:16:48.386 {
00:16:48.386 "name": "BaseBdev4",
00:16:48.386 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86",
00:16:48.386 "is_configured": true,
00:16:48.386 "data_offset": 0,
00:16:48.386 "data_size": 65536
00:16:48.386 }
00:16:48.386 ]
00:16:48.386 }'
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:48.386 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.386 [2024-10-01 13:50:58.491106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:16:48.386 I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:48.386 Zero copy mechanism will not be used.
00:16:48.386 Running I/O for 60 seconds...
00:16:48.645 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:48.645 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.645 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:48.645 [2024-10-01 13:50:58.811355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:48.903 13:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.903 13:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:48.903 [2024-10-01 13:50:58.880588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:16:48.903 [2024-10-01 13:50:58.882923] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:48.903 [2024-10-01 13:50:59.000057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:48.903 [2024-10-01 13:50:59.001297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:49.161 [2024-10-01 13:50:59.212610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:49.161 [2024-10-01 13:50:59.212939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:49.419 [2024-10-01 13:50:59.449115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:49.419 [2024-10-01 13:50:59.449693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:49.678 174.00 IOPS, 522.00 MiB/s 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.678 13:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:49.963 "name": "raid_bdev1",
00:16:49.963 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42",
00:16:49.963 "strip_size_kb": 0,
00:16:49.963 "state": "online",
00:16:49.963 "raid_level": "raid1",
00:16:49.963 "superblock": false,
00:16:49.963 "num_base_bdevs": 4,
00:16:49.963 "num_base_bdevs_discovered": 4,
00:16:49.963 "num_base_bdevs_operational": 4,
00:16:49.963 "process": {
00:16:49.963 "type": "rebuild",
00:16:49.963 "target": "spare",
00:16:49.963 "progress": {
00:16:49.963 "blocks": 12288,
00:16:49.963 "percent": 18
00:16:49.963 }
00:16:49.963 },
00:16:49.963 "base_bdevs_list": [
00:16:49.963 {
00:16:49.963 "name": "spare",
00:16:49.963 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf",
00:16:49.963 "is_configured": true,
00:16:49.963 "data_offset": 0,
00:16:49.963 "data_size": 65536
00:16:49.963 },
00:16:49.963 {
00:16:49.963 "name": "BaseBdev2",
00:16:49.963 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944",
00:16:49.963 "is_configured": true,
00:16:49.963 "data_offset": 0,
00:16:49.963 "data_size": 65536
00:16:49.963 },
00:16:49.963 {
00:16:49.963 "name": "BaseBdev3",
00:16:49.963 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139",
00:16:49.963 "is_configured": true,
00:16:49.963 "data_offset": 0,
00:16:49.963 "data_size": 65536
00:16:49.963 },
00:16:49.963 {
00:16:49.963 "name": "BaseBdev4",
00:16:49.963 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86",
00:16:49.963 "is_configured": true,
00:16:49.963 "data_offset": 0,
00:16:49.963 "data_size": 65536
00:16:49.963 }
00:16:49.963 ]
00:16:49.963 }'
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:49.963 13:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:49.963 [2024-10-01 13:51:00.016330] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:49.963 [2024-10-01 13:51:00.045002] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:49.963 [2024-10-01 13:51:00.056307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:49.963 [2024-10-01 13:51:00.056367] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:49.963 [2024-10-01 13:51:00.056388] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:49.963 [2024-10-01 13:51:00.094103] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.963 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:50.276 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.276 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:50.276 "name": "raid_bdev1",
00:16:50.276 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42",
00:16:50.276 "strip_size_kb": 0,
00:16:50.276 "state": "online",
00:16:50.276 "raid_level": "raid1",
00:16:50.276 "superblock": false,
00:16:50.276 "num_base_bdevs": 4,
00:16:50.276 "num_base_bdevs_discovered": 3,
00:16:50.276 "num_base_bdevs_operational": 3,
00:16:50.276 "base_bdevs_list": [
00:16:50.276 {
00:16:50.276 "name": null,
00:16:50.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.276 "is_configured": false,
00:16:50.276 "data_offset": 0,
00:16:50.276 "data_size": 65536
00:16:50.276 },
00:16:50.276 {
00:16:50.276 "name": "BaseBdev2",
00:16:50.276 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944",
00:16:50.276 "is_configured": true,
00:16:50.276 "data_offset": 0,
00:16:50.276 "data_size": 65536
00:16:50.276 },
00:16:50.276 {
00:16:50.276 "name": "BaseBdev3",
00:16:50.276 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139",
00:16:50.276 "is_configured": true,
00:16:50.276 "data_offset": 0,
00:16:50.276 "data_size": 65536
00:16:50.276 },
00:16:50.276 {
00:16:50.276 "name": "BaseBdev4",
00:16:50.276 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86",
00:16:50.276 "is_configured": true,
00:16:50.276 "data_offset": 0,
00:16:50.276 "data_size": 65536
00:16:50.276 }
00:16:50.276 ]
00:16:50.276 }'
00:16:50.276 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:50.276 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:50.535 162.00 IOPS, 486.00 MiB/s 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:50.535 "name": "raid_bdev1",
00:16:50.535 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42",
00:16:50.535 "strip_size_kb": 0,
00:16:50.535 "state": "online",
00:16:50.535 "raid_level": "raid1",
00:16:50.535 "superblock": false,
00:16:50.535 "num_base_bdevs": 4,
00:16:50.535 "num_base_bdevs_discovered": 3,
00:16:50.535 "num_base_bdevs_operational": 3,
00:16:50.535 "base_bdevs_list": [
00:16:50.535 {
00:16:50.535 "name": null,
00:16:50.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.535 "is_configured": false,
00:16:50.535 "data_offset": 0,
00:16:50.535 "data_size": 65536
00:16:50.535 },
00:16:50.535 {
00:16:50.535 "name": "BaseBdev2",
00:16:50.535 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944",
00:16:50.535 "is_configured": true,
00:16:50.535 "data_offset": 0,
00:16:50.535 "data_size": 65536
00:16:50.535 },
00:16:50.535 {
00:16:50.535 "name": "BaseBdev3",
00:16:50.535 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139",
00:16:50.535 "is_configured": true,
00:16:50.535 "data_offset": 0,
00:16:50.535 "data_size": 65536
00:16:50.535 },
00:16:50.535 {
00:16:50.535 "name": "BaseBdev4",
00:16:50.535 "uuid":
"42deff6b-40a4-50a1-8112-981146f76b86", 00:16:50.535 "is_configured": true, 00:16:50.535 "data_offset": 0, 00:16:50.535 "data_size": 65536 00:16:50.535 } 00:16:50.535 ] 00:16:50.535 }' 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.535 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.535 [2024-10-01 13:51:00.722911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.793 13:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.793 13:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:50.793 [2024-10-01 13:51:00.784218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:50.793 [2024-10-01 13:51:00.786519] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.793 [2024-10-01 13:51:00.904060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:50.793 [2024-10-01 13:51:00.904615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:51.052 [2024-10-01 13:51:01.116197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 
0 offset_end: 6144 00:16:51.052 [2024-10-01 13:51:01.116986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:51.310 [2024-10-01 13:51:01.448522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:51.568 159.67 IOPS, 479.00 MiB/s [2024-10-01 13:51:01.559842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:51.568 [2024-10-01 13:51:01.560168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.827 "name": "raid_bdev1", 00:16:51.827 "uuid": 
"a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:51.827 "strip_size_kb": 0, 00:16:51.827 "state": "online", 00:16:51.827 "raid_level": "raid1", 00:16:51.827 "superblock": false, 00:16:51.827 "num_base_bdevs": 4, 00:16:51.827 "num_base_bdevs_discovered": 4, 00:16:51.827 "num_base_bdevs_operational": 4, 00:16:51.827 "process": { 00:16:51.827 "type": "rebuild", 00:16:51.827 "target": "spare", 00:16:51.827 "progress": { 00:16:51.827 "blocks": 12288, 00:16:51.827 "percent": 18 00:16:51.827 } 00:16:51.827 }, 00:16:51.827 "base_bdevs_list": [ 00:16:51.827 { 00:16:51.827 "name": "spare", 00:16:51.827 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:51.827 "is_configured": true, 00:16:51.827 "data_offset": 0, 00:16:51.827 "data_size": 65536 00:16:51.827 }, 00:16:51.827 { 00:16:51.827 "name": "BaseBdev2", 00:16:51.827 "uuid": "7bb3dede-d7ec-5c02-9535-5f88709d4944", 00:16:51.827 "is_configured": true, 00:16:51.827 "data_offset": 0, 00:16:51.827 "data_size": 65536 00:16:51.827 }, 00:16:51.827 { 00:16:51.827 "name": "BaseBdev3", 00:16:51.827 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:51.827 "is_configured": true, 00:16:51.827 "data_offset": 0, 00:16:51.827 "data_size": 65536 00:16:51.827 }, 00:16:51.827 { 00:16:51.827 "name": "BaseBdev4", 00:16:51.827 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:51.827 "is_configured": true, 00:16:51.827 "data_offset": 0, 00:16:51.827 "data_size": 65536 00:16:51.827 } 00:16:51.827 ] 00:16:51.827 }' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.827 [2024-10-01 13:51:01.897887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:51.827 13:51:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.827 13:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.827 [2024-10-01 13:51:01.915240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.086 [2024-10-01 13:51:02.030598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:52.086 [2024-10-01 13:51:02.030901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:52.086 [2024-10-01 13:51:02.133553] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:52.086 [2024-10-01 13:51:02.133623] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:52.086 [2024-10-01 13:51:02.137544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # 
(( num_base_bdevs_operational-- )) 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.086 "name": "raid_bdev1", 00:16:52.086 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:52.086 "strip_size_kb": 0, 00:16:52.086 "state": "online", 00:16:52.086 "raid_level": "raid1", 00:16:52.086 "superblock": false, 00:16:52.086 "num_base_bdevs": 4, 00:16:52.086 "num_base_bdevs_discovered": 3, 00:16:52.086 "num_base_bdevs_operational": 3, 00:16:52.086 "process": { 00:16:52.086 "type": "rebuild", 00:16:52.086 "target": "spare", 00:16:52.086 "progress": { 00:16:52.086 "blocks": 16384, 00:16:52.086 "percent": 25 00:16:52.086 } 00:16:52.086 }, 00:16:52.086 "base_bdevs_list": [ 00:16:52.086 { 00:16:52.086 "name": "spare", 00:16:52.086 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:52.086 "is_configured": true, 00:16:52.086 
"data_offset": 0, 00:16:52.086 "data_size": 65536 00:16:52.086 }, 00:16:52.086 { 00:16:52.086 "name": null, 00:16:52.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.086 "is_configured": false, 00:16:52.086 "data_offset": 0, 00:16:52.086 "data_size": 65536 00:16:52.086 }, 00:16:52.086 { 00:16:52.086 "name": "BaseBdev3", 00:16:52.086 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:52.086 "is_configured": true, 00:16:52.086 "data_offset": 0, 00:16:52.086 "data_size": 65536 00:16:52.086 }, 00:16:52.086 { 00:16:52.086 "name": "BaseBdev4", 00:16:52.086 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:52.086 "is_configured": true, 00:16:52.086 "data_offset": 0, 00:16:52.086 "data_size": 65536 00:16:52.086 } 00:16:52.086 ] 00:16:52.086 }' 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.086 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=497 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.345 "name": "raid_bdev1", 00:16:52.345 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:52.345 "strip_size_kb": 0, 00:16:52.345 "state": "online", 00:16:52.345 "raid_level": "raid1", 00:16:52.345 "superblock": false, 00:16:52.345 "num_base_bdevs": 4, 00:16:52.345 "num_base_bdevs_discovered": 3, 00:16:52.345 "num_base_bdevs_operational": 3, 00:16:52.345 "process": { 00:16:52.345 "type": "rebuild", 00:16:52.345 "target": "spare", 00:16:52.345 "progress": { 00:16:52.345 "blocks": 18432, 00:16:52.345 "percent": 28 00:16:52.345 } 00:16:52.345 }, 00:16:52.345 "base_bdevs_list": [ 00:16:52.345 { 00:16:52.345 "name": "spare", 00:16:52.345 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:52.345 "is_configured": true, 00:16:52.345 "data_offset": 0, 00:16:52.345 "data_size": 65536 00:16:52.345 }, 00:16:52.345 { 00:16:52.345 "name": null, 00:16:52.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.345 "is_configured": false, 00:16:52.345 "data_offset": 0, 00:16:52.345 "data_size": 65536 00:16:52.345 }, 00:16:52.345 { 00:16:52.345 "name": "BaseBdev3", 00:16:52.345 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:52.345 "is_configured": true, 00:16:52.345 "data_offset": 0, 00:16:52.345 "data_size": 65536 00:16:52.345 }, 00:16:52.345 { 00:16:52.345 "name": "BaseBdev4", 00:16:52.345 "uuid": 
"42deff6b-40a4-50a1-8112-981146f76b86", 00:16:52.345 "is_configured": true, 00:16:52.345 "data_offset": 0, 00:16:52.345 "data_size": 65536 00:16:52.345 } 00:16:52.345 ] 00:16:52.345 }' 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.345 [2024-10-01 13:51:02.372636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:52.345 [2024-10-01 13:51:02.373175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.345 13:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.345 [2024-10-01 13:51:02.490882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:52.913 140.50 IOPS, 421.50 MiB/s [2024-10-01 13:51:03.093623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:53.172 [2024-10-01 13:51:03.330276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:53.172 [2024-10-01 13:51:03.330532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.431 
13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.431 "name": "raid_bdev1", 00:16:53.431 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:53.431 "strip_size_kb": 0, 00:16:53.431 "state": "online", 00:16:53.431 "raid_level": "raid1", 00:16:53.431 "superblock": false, 00:16:53.431 "num_base_bdevs": 4, 00:16:53.431 "num_base_bdevs_discovered": 3, 00:16:53.431 "num_base_bdevs_operational": 3, 00:16:53.431 "process": { 00:16:53.431 "type": "rebuild", 00:16:53.431 "target": "spare", 00:16:53.431 "progress": { 00:16:53.431 "blocks": 34816, 00:16:53.431 "percent": 53 00:16:53.431 } 00:16:53.431 }, 00:16:53.431 "base_bdevs_list": [ 00:16:53.431 { 00:16:53.431 "name": "spare", 00:16:53.431 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:53.431 "is_configured": true, 00:16:53.431 "data_offset": 0, 00:16:53.431 "data_size": 65536 00:16:53.431 }, 00:16:53.431 { 00:16:53.431 "name": null, 00:16:53.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.431 
"is_configured": false, 00:16:53.431 "data_offset": 0, 00:16:53.431 "data_size": 65536 00:16:53.431 }, 00:16:53.431 { 00:16:53.431 "name": "BaseBdev3", 00:16:53.431 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:53.431 "is_configured": true, 00:16:53.431 "data_offset": 0, 00:16:53.431 "data_size": 65536 00:16:53.431 }, 00:16:53.431 { 00:16:53.431 "name": "BaseBdev4", 00:16:53.431 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:53.431 "is_configured": true, 00:16:53.431 "data_offset": 0, 00:16:53.431 "data_size": 65536 00:16:53.431 } 00:16:53.431 ] 00:16:53.431 }' 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.431 122.20 IOPS, 366.60 MiB/s 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.431 13:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.998 [2024-10-01 13:51:04.004388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:54.257 [2024-10-01 13:51:04.338495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:54.516 108.00 IOPS, 324.00 MiB/s 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.516 [2024-10-01 13:51:04.570361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.516 "name": "raid_bdev1", 00:16:54.516 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:54.516 "strip_size_kb": 0, 00:16:54.516 "state": "online", 00:16:54.516 "raid_level": "raid1", 00:16:54.516 "superblock": false, 00:16:54.516 "num_base_bdevs": 4, 00:16:54.516 "num_base_bdevs_discovered": 3, 00:16:54.516 "num_base_bdevs_operational": 3, 00:16:54.516 "process": { 00:16:54.516 "type": "rebuild", 00:16:54.516 "target": "spare", 00:16:54.516 "progress": { 00:16:54.516 "blocks": 53248, 00:16:54.516 "percent": 81 00:16:54.516 } 00:16:54.516 }, 00:16:54.516 "base_bdevs_list": [ 00:16:54.516 { 00:16:54.516 "name": "spare", 00:16:54.516 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:54.516 "is_configured": true, 00:16:54.516 "data_offset": 0, 00:16:54.516 "data_size": 65536 00:16:54.516 }, 00:16:54.516 { 00:16:54.516 "name": null, 00:16:54.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.516 "is_configured": false, 00:16:54.516 "data_offset": 0, 00:16:54.516 
"data_size": 65536 00:16:54.516 }, 00:16:54.516 { 00:16:54.516 "name": "BaseBdev3", 00:16:54.516 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:54.516 "is_configured": true, 00:16:54.516 "data_offset": 0, 00:16:54.516 "data_size": 65536 00:16:54.516 }, 00:16:54.516 { 00:16:54.516 "name": "BaseBdev4", 00:16:54.516 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:54.516 "is_configured": true, 00:16:54.516 "data_offset": 0, 00:16:54.516 "data_size": 65536 00:16:54.516 } 00:16:54.516 ] 00:16:54.516 }' 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.516 13:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.774 [2024-10-01 13:51:04.905830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:55.033 [2024-10-01 13:51:05.127202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:55.597 96.71 IOPS, 290.14 MiB/s [2024-10-01 13:51:05.553054] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:55.597 [2024-10-01 13:51:05.658525] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:55.597 [2024-10-01 13:51:05.662607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.597 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.597 "name": "raid_bdev1", 00:16:55.597 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:55.597 "strip_size_kb": 0, 00:16:55.597 "state": "online", 00:16:55.597 "raid_level": "raid1", 00:16:55.597 "superblock": false, 00:16:55.598 "num_base_bdevs": 4, 00:16:55.598 "num_base_bdevs_discovered": 3, 00:16:55.598 "num_base_bdevs_operational": 3, 00:16:55.598 "base_bdevs_list": [ 00:16:55.598 { 00:16:55.598 "name": "spare", 00:16:55.598 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:55.598 "is_configured": true, 00:16:55.598 "data_offset": 0, 00:16:55.598 "data_size": 65536 00:16:55.598 }, 00:16:55.598 { 00:16:55.598 "name": null, 00:16:55.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.598 "is_configured": false, 00:16:55.598 "data_offset": 0, 00:16:55.598 "data_size": 65536 00:16:55.598 }, 00:16:55.598 { 00:16:55.598 "name": 
"BaseBdev3", 00:16:55.598 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:55.598 "is_configured": true, 00:16:55.598 "data_offset": 0, 00:16:55.598 "data_size": 65536 00:16:55.598 }, 00:16:55.598 { 00:16:55.598 "name": "BaseBdev4", 00:16:55.598 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:55.598 "is_configured": true, 00:16:55.598 "data_offset": 0, 00:16:55.598 "data_size": 65536 00:16:55.598 } 00:16:55.598 ] 00:16:55.598 }' 00:16:55.598 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.598 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.855 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.856 
13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.856 "name": "raid_bdev1", 00:16:55.856 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:55.856 "strip_size_kb": 0, 00:16:55.856 "state": "online", 00:16:55.856 "raid_level": "raid1", 00:16:55.856 "superblock": false, 00:16:55.856 "num_base_bdevs": 4, 00:16:55.856 "num_base_bdevs_discovered": 3, 00:16:55.856 "num_base_bdevs_operational": 3, 00:16:55.856 "base_bdevs_list": [ 00:16:55.856 { 00:16:55.856 "name": "spare", 00:16:55.856 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": null, 00:16:55.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.856 "is_configured": false, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": "BaseBdev3", 00:16:55.856 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": "BaseBdev4", 00:16:55.856 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 } 00:16:55.856 ] 00:16:55.856 }' 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.856 13:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.856 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.856 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.856 "name": "raid_bdev1", 00:16:55.856 "uuid": "a776c60f-848e-42ad-8478-5226c9e0eb42", 00:16:55.856 "strip_size_kb": 0, 00:16:55.856 "state": "online", 00:16:55.856 "raid_level": "raid1", 00:16:55.856 "superblock": false, 00:16:55.856 "num_base_bdevs": 4, 00:16:55.856 
"num_base_bdevs_discovered": 3, 00:16:55.856 "num_base_bdevs_operational": 3, 00:16:55.856 "base_bdevs_list": [ 00:16:55.856 { 00:16:55.856 "name": "spare", 00:16:55.856 "uuid": "45fb51da-9c53-5f86-b644-e844145bffcf", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": null, 00:16:55.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.856 "is_configured": false, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": "BaseBdev3", 00:16:55.856 "uuid": "50249e28-2887-57f3-b5dc-ce2f20bec139", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 }, 00:16:55.856 { 00:16:55.856 "name": "BaseBdev4", 00:16:55.856 "uuid": "42deff6b-40a4-50a1-8112-981146f76b86", 00:16:55.856 "is_configured": true, 00:16:55.856 "data_offset": 0, 00:16:55.856 "data_size": 65536 00:16:55.856 } 00:16:55.856 ] 00:16:55.856 }' 00:16:55.856 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.856 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.422 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.422 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.422 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.422 [2024-10-01 13:51:06.355845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.422 [2024-10-01 13:51:06.355880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.422 00:16:56.423 Latency(us) 00:16:56.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.423 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 2, IO size: 3145728) 00:16:56.423 raid_bdev1 : 7.94 88.30 264.90 0.00 0.00 16384.27 296.10 111174.32 00:16:56.423 =================================================================================================================== 00:16:56.423 Total : 88.30 264.90 0.00 0.00 16384.27 296.10 111174.32 00:16:56.423 [2024-10-01 13:51:06.443004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.423 [2024-10-01 13:51:06.443062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.423 [2024-10-01 13:51:06.443167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.423 [2024-10-01 13:51:06.443184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:56.423 { 00:16:56.423 "results": [ 00:16:56.423 { 00:16:56.423 "job": "raid_bdev1", 00:16:56.423 "core_mask": "0x1", 00:16:56.423 "workload": "randrw", 00:16:56.423 "percentage": 50, 00:16:56.423 "status": "finished", 00:16:56.423 "queue_depth": 2, 00:16:56.423 "io_size": 3145728, 00:16:56.423 "runtime": 7.938971, 00:16:56.423 "iops": 88.29859688365154, 00:16:56.423 "mibps": 264.8957906509546, 00:16:56.423 "io_failed": 0, 00:16:56.423 "io_timeout": 0, 00:16:56.423 "avg_latency_us": 16384.27220780411, 00:16:56.423 "min_latency_us": 296.09638554216866, 00:16:56.423 "max_latency_us": 111174.32289156626 00:16:56.423 } 00:16:56.423 ], 00:16:56.423 "core_count": 1 00:16:56.423 } 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.423 13:51:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.423 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:56.680 /dev/nbd0 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.680 1+0 records in 00:16:56.680 1+0 records out 00:16:56.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408698 s, 10.0 MB/s 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.680 13:51:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:56.938 /dev/nbd1 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:56.938 
13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.938 1+0 records in 00:16:56.938 1+0 records out 00:16:56.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427065 s, 9.6 MB/s 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.938 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.197 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:57.455 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:57.456 13:51:07 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.456 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:57.715 /dev/nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:57.715 1+0 records in 00:16:57.715 1+0 records out 00:16:57.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285106 s, 14.4 MB/s 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.715 13:51:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.974 13:51:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.974 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78759 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78759 ']' 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78759 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78759 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78759' 00:16:58.234 killing process with pid 78759 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78759 00:16:58.234 Received shutdown signal, test time was about 9.895517 seconds 00:16:58.234 00:16:58.234 Latency(us) 00:16:58.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.234 =================================================================================================================== 00:16:58.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.234 [2024-10-01 13:51:08.373051] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.234 13:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78759 00:16:58.802 [2024-10-01 13:51:08.810798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.184 ************************************ 00:17:00.184 END TEST raid_rebuild_test_io 00:17:00.184 ************************************ 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.185 00:17:00.185 real 0m13.559s 00:17:00.185 user 0m16.861s 00:17:00.185 sys 0m2.150s 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 13:51:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:00.185 13:51:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:00.185 13:51:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.185 13:51:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 ************************************ 00:17:00.185 START TEST raid_rebuild_test_sb_io 00:17:00.185 ************************************ 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # 
local verify=true 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@577 -- # local create_arg 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79168 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79168 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79168 ']' 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:00.185 13:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.185 Zero copy mechanism will not be used. 00:17:00.185 [2024-10-01 13:51:10.327967] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:00.185 [2024-10-01 13:51:10.328089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79168 ] 00:17:00.443 [2024-10-01 13:51:10.495854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.700 [2024-10-01 13:51:10.711836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.958 [2024-10-01 13:51:10.931611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.958 [2024-10-01 13:51:10.931655] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 BaseBdev1_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 [2024-10-01 13:51:11.210928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.220 [2024-10-01 13:51:11.211003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.220 [2024-10-01 13:51:11.211027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.220 [2024-10-01 13:51:11.211045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.220 [2024-10-01 13:51:11.213492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.220 [2024-10-01 13:51:11.213646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.220 BaseBdev1 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 BaseBdev2_malloc 00:17:01.220 13:51:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 [2024-10-01 13:51:11.278655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:01.220 [2024-10-01 13:51:11.278724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.220 [2024-10-01 13:51:11.278746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.220 [2024-10-01 13:51:11.278760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.220 [2024-10-01 13:51:11.281090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.220 [2024-10-01 13:51:11.281133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.220 BaseBdev2 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 BaseBdev3_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 [2024-10-01 13:51:11.335554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:01.220 [2024-10-01 13:51:11.335764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.220 [2024-10-01 13:51:11.335799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.220 [2024-10-01 13:51:11.335814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.220 [2024-10-01 13:51:11.338204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.220 [2024-10-01 13:51:11.338251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:01.220 BaseBdev3 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 BaseBdev4_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 [2024-10-01 13:51:11.393623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:01.220 [2024-10-01 13:51:11.393698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.220 [2024-10-01 13:51:11.393721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:01.220 [2024-10-01 13:51:11.393736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.220 [2024-10-01 13:51:11.396095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.220 [2024-10-01 13:51:11.396140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:01.220 BaseBdev4 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 spare_malloc 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 spare_delay 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:51:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 [2024-10-01 13:51:11.462066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:01.479 [2024-10-01 13:51:11.462138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.479 [2024-10-01 13:51:11.462161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:01.479 [2024-10-01 13:51:11.462175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.479 [2024-10-01 13:51:11.464635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.479 [2024-10-01 13:51:11.464680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:01.479 spare 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 [2024-10-01 13:51:11.474120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.479 [2024-10-01 13:51:11.476318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.479 [2024-10-01 13:51:11.476390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.479 [2024-10-01 13:51:11.476476] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:01.479 [2024-10-01 13:51:11.476659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:01.479 [2024-10-01 13:51:11.476675] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:01.479 [2024-10-01 13:51:11.476983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:01.479 [2024-10-01 13:51:11.477185] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:01.479 [2024-10-01 13:51:11.477196] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:01.479 [2024-10-01 13:51:11.477377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.479 "name": "raid_bdev1", 00:17:01.479 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:01.479 "strip_size_kb": 0, 00:17:01.479 "state": "online", 00:17:01.479 "raid_level": "raid1", 00:17:01.479 "superblock": true, 00:17:01.479 "num_base_bdevs": 4, 00:17:01.479 "num_base_bdevs_discovered": 4, 00:17:01.479 "num_base_bdevs_operational": 4, 00:17:01.479 "base_bdevs_list": [ 00:17:01.479 { 00:17:01.479 "name": "BaseBdev1", 00:17:01.479 "uuid": "141de80a-5140-5304-bb7c-055f4a3c8bb4", 00:17:01.479 "is_configured": true, 00:17:01.479 "data_offset": 2048, 00:17:01.479 "data_size": 63488 00:17:01.479 }, 00:17:01.479 { 00:17:01.479 "name": "BaseBdev2", 00:17:01.479 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:01.479 "is_configured": true, 00:17:01.479 "data_offset": 2048, 00:17:01.479 "data_size": 63488 00:17:01.479 }, 00:17:01.479 { 00:17:01.479 "name": "BaseBdev3", 00:17:01.479 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:01.479 "is_configured": true, 00:17:01.479 "data_offset": 2048, 00:17:01.479 "data_size": 63488 00:17:01.479 }, 00:17:01.479 { 00:17:01.479 "name": "BaseBdev4", 00:17:01.479 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:01.479 
"is_configured": true, 00:17:01.479 "data_offset": 2048, 00:17:01.479 "data_size": 63488 00:17:01.479 } 00:17:01.479 ] 00:17:01.479 }' 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.479 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.736 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.736 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:01.736 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.736 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.736 [2024-10-01 13:51:11.853865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.737 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.995 [2024-10-01 13:51:11.953386] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.995 13:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.995 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.995 "name": "raid_bdev1", 00:17:01.995 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:01.995 "strip_size_kb": 0, 00:17:01.995 "state": "online", 00:17:01.995 "raid_level": "raid1", 00:17:01.995 "superblock": true, 00:17:01.995 "num_base_bdevs": 4, 00:17:01.995 "num_base_bdevs_discovered": 3, 00:17:01.995 "num_base_bdevs_operational": 3, 00:17:01.995 "base_bdevs_list": [ 00:17:01.995 { 00:17:01.995 "name": null, 00:17:01.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.995 "is_configured": false, 00:17:01.995 "data_offset": 0, 00:17:01.995 "data_size": 63488 00:17:01.995 }, 00:17:01.995 { 00:17:01.995 "name": "BaseBdev2", 00:17:01.995 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:01.995 "is_configured": true, 00:17:01.995 "data_offset": 2048, 00:17:01.995 "data_size": 63488 00:17:01.995 }, 00:17:01.995 { 00:17:01.995 "name": "BaseBdev3", 00:17:01.995 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:01.995 "is_configured": true, 00:17:01.995 "data_offset": 2048, 00:17:01.995 "data_size": 63488 00:17:01.995 }, 00:17:01.995 { 00:17:01.995 "name": "BaseBdev4", 00:17:01.995 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:01.995 "is_configured": true, 00:17:01.995 "data_offset": 2048, 00:17:01.995 "data_size": 63488 00:17:01.995 } 00:17:01.995 ] 00:17:01.995 }' 00:17:01.995 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.995 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.995 [2024-10-01 
13:51:12.049814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:01.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:01.995 Zero copy mechanism will not be used. 00:17:01.995 Running I/O for 60 seconds... 00:17:02.253 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.253 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.253 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.253 [2024-10-01 13:51:12.381686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.253 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.253 13:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:02.510 [2024-10-01 13:51:12.448287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:02.510 [2024-10-01 13:51:12.450491] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.510 [2024-10-01 13:51:12.573666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:02.510 [2024-10-01 13:51:12.574971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:02.769 [2024-10-01 13:51:12.802419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:02.769 [2024-10-01 13:51:12.802942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:03.028 [2024-10-01 13:51:13.040561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:17:03.287 148.00 IOPS, 444.00 MiB/s [2024-10-01 13:51:13.265580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.287 [2024-10-01 13:51:13.266516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.287 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.287 "name": "raid_bdev1", 00:17:03.287 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:03.287 "strip_size_kb": 0, 00:17:03.287 "state": "online", 00:17:03.287 "raid_level": "raid1", 00:17:03.287 "superblock": true, 00:17:03.287 "num_base_bdevs": 4, 00:17:03.287 "num_base_bdevs_discovered": 4, 00:17:03.287 "num_base_bdevs_operational": 4, 00:17:03.287 "process": { 00:17:03.287 
"type": "rebuild", 00:17:03.287 "target": "spare", 00:17:03.287 "progress": { 00:17:03.287 "blocks": 10240, 00:17:03.287 "percent": 16 00:17:03.287 } 00:17:03.287 }, 00:17:03.287 "base_bdevs_list": [ 00:17:03.287 { 00:17:03.287 "name": "spare", 00:17:03.287 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:03.287 "is_configured": true, 00:17:03.287 "data_offset": 2048, 00:17:03.287 "data_size": 63488 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "name": "BaseBdev2", 00:17:03.287 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:03.287 "is_configured": true, 00:17:03.287 "data_offset": 2048, 00:17:03.287 "data_size": 63488 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "name": "BaseBdev3", 00:17:03.287 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:03.287 "is_configured": true, 00:17:03.287 "data_offset": 2048, 00:17:03.287 "data_size": 63488 00:17:03.287 }, 00:17:03.287 { 00:17:03.287 "name": "BaseBdev4", 00:17:03.287 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:03.288 "is_configured": true, 00:17:03.288 "data_offset": 2048, 00:17:03.288 "data_size": 63488 00:17:03.288 } 00:17:03.288 ] 00:17:03.288 }' 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.546 [2024-10-01 13:51:13.575411] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.546 [2024-10-01 13:51:13.615650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:03.546 [2024-10-01 13:51:13.637352] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:03.546 [2024-10-01 13:51:13.649276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.546 [2024-10-01 13:51:13.649347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.546 [2024-10-01 13:51:13.649368] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.546 [2024-10-01 13:51:13.679392] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.546 "name": "raid_bdev1", 00:17:03.546 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:03.546 "strip_size_kb": 0, 00:17:03.546 "state": "online", 00:17:03.546 "raid_level": "raid1", 00:17:03.546 "superblock": true, 00:17:03.546 "num_base_bdevs": 4, 00:17:03.546 "num_base_bdevs_discovered": 3, 00:17:03.546 "num_base_bdevs_operational": 3, 00:17:03.546 "base_bdevs_list": [ 00:17:03.546 { 00:17:03.546 "name": null, 00:17:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.546 "is_configured": false, 00:17:03.546 "data_offset": 0, 00:17:03.546 "data_size": 63488 00:17:03.546 }, 00:17:03.546 { 00:17:03.546 "name": "BaseBdev2", 00:17:03.546 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:03.546 "is_configured": true, 00:17:03.546 "data_offset": 2048, 00:17:03.546 "data_size": 63488 00:17:03.546 }, 00:17:03.546 { 00:17:03.546 "name": "BaseBdev3", 00:17:03.546 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:03.546 "is_configured": true, 00:17:03.546 "data_offset": 2048, 00:17:03.546 "data_size": 63488 00:17:03.546 }, 00:17:03.546 { 00:17:03.546 "name": "BaseBdev4", 00:17:03.546 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:03.546 "is_configured": 
true, 00:17:03.546 "data_offset": 2048, 00:17:03.546 "data_size": 63488 00:17:03.546 } 00:17:03.546 ] 00:17:03.546 }' 00:17:03.546 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.804 13:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.063 152.50 IOPS, 457.50 MiB/s 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.063 "name": "raid_bdev1", 00:17:04.063 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:04.063 "strip_size_kb": 0, 00:17:04.063 "state": "online", 00:17:04.063 "raid_level": "raid1", 00:17:04.063 "superblock": true, 00:17:04.063 "num_base_bdevs": 4, 00:17:04.063 "num_base_bdevs_discovered": 3, 00:17:04.063 "num_base_bdevs_operational": 3, 00:17:04.063 "base_bdevs_list": [ 00:17:04.063 { 
00:17:04.063 "name": null, 00:17:04.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.063 "is_configured": false, 00:17:04.063 "data_offset": 0, 00:17:04.063 "data_size": 63488 00:17:04.063 }, 00:17:04.063 { 00:17:04.063 "name": "BaseBdev2", 00:17:04.063 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:04.063 "is_configured": true, 00:17:04.063 "data_offset": 2048, 00:17:04.063 "data_size": 63488 00:17:04.063 }, 00:17:04.063 { 00:17:04.063 "name": "BaseBdev3", 00:17:04.063 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:04.063 "is_configured": true, 00:17:04.063 "data_offset": 2048, 00:17:04.063 "data_size": 63488 00:17:04.063 }, 00:17:04.063 { 00:17:04.063 "name": "BaseBdev4", 00:17:04.063 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:04.063 "is_configured": true, 00:17:04.063 "data_offset": 2048, 00:17:04.063 "data_size": 63488 00:17:04.063 } 00:17:04.063 ] 00:17:04.063 }' 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.063 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.322 [2024-10-01 13:51:14.307784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.322 13:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.322 13:51:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:04.322 [2024-10-01 13:51:14.392921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:04.322 [2024-10-01 13:51:14.395134] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.322 [2024-10-01 13:51:14.511262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:04.322 [2024-10-01 13:51:14.511873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:04.581 [2024-10-01 13:51:14.651996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:04.841 [2024-10-01 13:51:14.995534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:04.841 [2024-10-01 13:51:14.996316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:05.100 147.33 IOPS, 442.00 MiB/s [2024-10-01 13:51:15.229254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:05.100 [2024-10-01 13:51:15.230258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.359 13:51:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.359 "name": "raid_bdev1", 00:17:05.359 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:05.359 "strip_size_kb": 0, 00:17:05.359 "state": "online", 00:17:05.359 "raid_level": "raid1", 00:17:05.359 "superblock": true, 00:17:05.359 "num_base_bdevs": 4, 00:17:05.359 "num_base_bdevs_discovered": 4, 00:17:05.359 "num_base_bdevs_operational": 4, 00:17:05.359 "process": { 00:17:05.359 "type": "rebuild", 00:17:05.359 "target": "spare", 00:17:05.359 "progress": { 00:17:05.359 "blocks": 10240, 00:17:05.359 "percent": 16 00:17:05.359 } 00:17:05.359 }, 00:17:05.359 "base_bdevs_list": [ 00:17:05.359 { 00:17:05.359 "name": "spare", 00:17:05.359 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev2", 00:17:05.359 "uuid": "43114785-63bb-5e44-b461-4fad3a6a2465", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev3", 00:17:05.359 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 
"data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev4", 00:17:05.359 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 } 00:17:05.359 ] 00:17:05.359 }' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:05.359 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.359 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.360 [2024-10-01 13:51:15.522676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.618 [2024-10-01 13:51:15.585475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:17:05.618 [2024-10-01 13:51:15.586925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:05.618 [2024-10-01 13:51:15.788542] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:05.618 [2024-10-01 13:51:15.788811] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.618 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.877 "name": "raid_bdev1", 00:17:05.877 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:05.877 "strip_size_kb": 0, 00:17:05.877 "state": "online", 00:17:05.877 "raid_level": "raid1", 00:17:05.877 "superblock": true, 00:17:05.877 "num_base_bdevs": 4, 00:17:05.877 "num_base_bdevs_discovered": 3, 00:17:05.877 "num_base_bdevs_operational": 3, 00:17:05.877 "process": { 00:17:05.877 "type": "rebuild", 00:17:05.877 "target": "spare", 00:17:05.877 "progress": { 00:17:05.877 "blocks": 14336, 00:17:05.877 "percent": 22 00:17:05.877 } 00:17:05.877 }, 00:17:05.877 "base_bdevs_list": [ 00:17:05.877 { 00:17:05.877 "name": "spare", 00:17:05.877 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:05.877 "is_configured": true, 00:17:05.877 "data_offset": 2048, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": null, 00:17:05.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.877 "is_configured": false, 00:17:05.877 "data_offset": 0, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": "BaseBdev3", 00:17:05.877 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:05.877 "is_configured": true, 00:17:05.877 "data_offset": 2048, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": "BaseBdev4", 00:17:05.877 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:05.877 "is_configured": true, 00:17:05.877 "data_offset": 2048, 00:17:05.877 "data_size": 63488 00:17:05.877 } 00:17:05.877 ] 00:17:05.877 }' 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.877 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.877 "name": "raid_bdev1", 00:17:05.877 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:05.877 "strip_size_kb": 0, 00:17:05.877 "state": "online", 00:17:05.877 "raid_level": "raid1", 00:17:05.877 "superblock": true, 00:17:05.877 "num_base_bdevs": 4, 00:17:05.877 "num_base_bdevs_discovered": 3, 00:17:05.877 "num_base_bdevs_operational": 3, 00:17:05.877 "process": { 00:17:05.877 "type": "rebuild", 00:17:05.877 "target": "spare", 00:17:05.877 "progress": { 00:17:05.877 
"blocks": 16384, 00:17:05.877 "percent": 25 00:17:05.877 } 00:17:05.877 }, 00:17:05.877 "base_bdevs_list": [ 00:17:05.877 { 00:17:05.877 "name": "spare", 00:17:05.877 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:05.877 "is_configured": true, 00:17:05.877 "data_offset": 2048, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": null, 00:17:05.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.877 "is_configured": false, 00:17:05.877 "data_offset": 0, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": "BaseBdev3", 00:17:05.877 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:05.877 "is_configured": true, 00:17:05.877 "data_offset": 2048, 00:17:05.877 "data_size": 63488 00:17:05.877 }, 00:17:05.877 { 00:17:05.877 "name": "BaseBdev4", 00:17:05.877 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:05.877 "is_configured": true, 00:17:05.878 "data_offset": 2048, 00:17:05.878 "data_size": 63488 00:17:05.878 } 00:17:05.878 ] 00:17:05.878 }' 00:17:05.878 13:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.878 13:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.878 13:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.878 13:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.878 13:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.137 121.25 IOPS, 363.75 MiB/s [2024-10-01 13:51:16.178949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:06.397 [2024-10-01 13:51:16.423094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:06.970 110.60 IOPS, 331.80 MiB/s 
13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.970 "name": "raid_bdev1", 00:17:06.970 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:06.970 "strip_size_kb": 0, 00:17:06.970 "state": "online", 00:17:06.970 "raid_level": "raid1", 00:17:06.970 "superblock": true, 00:17:06.970 "num_base_bdevs": 4, 00:17:06.970 "num_base_bdevs_discovered": 3, 00:17:06.970 "num_base_bdevs_operational": 3, 00:17:06.970 "process": { 00:17:06.970 "type": "rebuild", 00:17:06.970 "target": "spare", 00:17:06.970 "progress": { 00:17:06.970 "blocks": 32768, 00:17:06.970 "percent": 51 00:17:06.970 } 00:17:06.970 }, 00:17:06.970 "base_bdevs_list": [ 00:17:06.970 { 00:17:06.970 "name": "spare", 00:17:06.970 
"uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:06.970 "is_configured": true, 00:17:06.970 "data_offset": 2048, 00:17:06.970 "data_size": 63488 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "name": null, 00:17:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.970 "is_configured": false, 00:17:06.970 "data_offset": 0, 00:17:06.970 "data_size": 63488 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "name": "BaseBdev3", 00:17:06.970 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:06.970 "is_configured": true, 00:17:06.970 "data_offset": 2048, 00:17:06.970 "data_size": 63488 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "name": "BaseBdev4", 00:17:06.970 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:06.970 "is_configured": true, 00:17:06.970 "data_offset": 2048, 00:17:06.970 "data_size": 63488 00:17:06.970 } 00:17:06.970 ] 00:17:06.970 }' 00:17:06.970 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.970 [2024-10-01 13:51:17.147549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:06.970 [2024-10-01 13:51:17.148091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:07.229 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.229 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.229 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.229 13:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.797 [2024-10-01 13:51:17.834974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:07.797 [2024-10-01 13:51:17.951852] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:08.055 100.33 IOPS, 301.00 MiB/s 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.055 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.314 "name": "raid_bdev1", 00:17:08.314 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:08.314 "strip_size_kb": 0, 00:17:08.314 "state": "online", 00:17:08.314 "raid_level": "raid1", 00:17:08.314 "superblock": true, 00:17:08.314 "num_base_bdevs": 4, 00:17:08.314 "num_base_bdevs_discovered": 3, 00:17:08.314 "num_base_bdevs_operational": 3, 00:17:08.314 "process": { 00:17:08.314 "type": "rebuild", 00:17:08.314 "target": "spare", 00:17:08.314 "progress": { 00:17:08.314 "blocks": 49152, 
00:17:08.314 "percent": 77 00:17:08.314 } 00:17:08.314 }, 00:17:08.314 "base_bdevs_list": [ 00:17:08.314 { 00:17:08.314 "name": "spare", 00:17:08.314 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:08.314 "is_configured": true, 00:17:08.314 "data_offset": 2048, 00:17:08.314 "data_size": 63488 00:17:08.314 }, 00:17:08.314 { 00:17:08.314 "name": null, 00:17:08.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.314 "is_configured": false, 00:17:08.314 "data_offset": 0, 00:17:08.314 "data_size": 63488 00:17:08.314 }, 00:17:08.314 { 00:17:08.314 "name": "BaseBdev3", 00:17:08.314 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:08.314 "is_configured": true, 00:17:08.314 "data_offset": 2048, 00:17:08.314 "data_size": 63488 00:17:08.314 }, 00:17:08.314 { 00:17:08.314 "name": "BaseBdev4", 00:17:08.314 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:08.314 "is_configured": true, 00:17:08.314 "data_offset": 2048, 00:17:08.314 "data_size": 63488 00:17:08.314 } 00:17:08.314 ] 00:17:08.314 }' 00:17:08.314 [2024-10-01 13:51:18.278269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.314 13:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.314 [2024-10-01 13:51:18.495653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:08.881 [2024-10-01 13:51:18.828364] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:08.881 [2024-10-01 13:51:19.043989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:17:09.448 90.86 IOPS, 272.57 MiB/s [2024-10-01 13:51:19.378613] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.448 "name": "raid_bdev1", 00:17:09.448 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:09.448 "strip_size_kb": 0, 00:17:09.448 "state": "online", 00:17:09.448 "raid_level": "raid1", 00:17:09.448 "superblock": true, 00:17:09.448 
"num_base_bdevs": 4, 00:17:09.448 "num_base_bdevs_discovered": 3, 00:17:09.448 "num_base_bdevs_operational": 3, 00:17:09.448 "process": { 00:17:09.448 "type": "rebuild", 00:17:09.448 "target": "spare", 00:17:09.448 "progress": { 00:17:09.448 "blocks": 63488, 00:17:09.448 "percent": 100 00:17:09.448 } 00:17:09.448 }, 00:17:09.448 "base_bdevs_list": [ 00:17:09.448 { 00:17:09.448 "name": "spare", 00:17:09.448 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:09.448 "is_configured": true, 00:17:09.448 "data_offset": 2048, 00:17:09.448 "data_size": 63488 00:17:09.448 }, 00:17:09.448 { 00:17:09.448 "name": null, 00:17:09.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.448 "is_configured": false, 00:17:09.448 "data_offset": 0, 00:17:09.448 "data_size": 63488 00:17:09.448 }, 00:17:09.448 { 00:17:09.448 "name": "BaseBdev3", 00:17:09.448 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:09.448 "is_configured": true, 00:17:09.448 "data_offset": 2048, 00:17:09.448 "data_size": 63488 00:17:09.448 }, 00:17:09.448 { 00:17:09.448 "name": "BaseBdev4", 00:17:09.448 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:09.448 "is_configured": true, 00:17:09.448 "data_offset": 2048, 00:17:09.448 "data_size": 63488 00:17:09.448 } 00:17:09.448 ] 00:17:09.448 }' 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.448 [2024-10-01 13:51:19.478429] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:09.448 [2024-10-01 13:51:19.480391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e 
]] 00:17:09.448 13:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.583 86.00 IOPS, 258.00 MiB/s 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.583 "name": "raid_bdev1", 00:17:10.583 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:10.583 "strip_size_kb": 0, 00:17:10.583 "state": "online", 00:17:10.583 "raid_level": "raid1", 00:17:10.583 "superblock": true, 00:17:10.583 "num_base_bdevs": 4, 00:17:10.583 "num_base_bdevs_discovered": 3, 00:17:10.583 "num_base_bdevs_operational": 3, 00:17:10.583 "base_bdevs_list": [ 00:17:10.583 { 00:17:10.583 "name": "spare", 00:17:10.583 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:10.583 
"is_configured": true, 00:17:10.583 "data_offset": 2048, 00:17:10.583 "data_size": 63488 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": null, 00:17:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.583 "is_configured": false, 00:17:10.583 "data_offset": 0, 00:17:10.583 "data_size": 63488 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "BaseBdev3", 00:17:10.583 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 2048, 00:17:10.583 "data_size": 63488 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "BaseBdev4", 00:17:10.583 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 2048, 00:17:10.583 "data_size": 63488 00:17:10.583 } 00:17:10.583 ] 00:17:10.583 }' 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.583 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.584 "name": "raid_bdev1", 00:17:10.584 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:10.584 "strip_size_kb": 0, 00:17:10.584 "state": "online", 00:17:10.584 "raid_level": "raid1", 00:17:10.584 "superblock": true, 00:17:10.584 "num_base_bdevs": 4, 00:17:10.584 "num_base_bdevs_discovered": 3, 00:17:10.584 "num_base_bdevs_operational": 3, 00:17:10.584 "base_bdevs_list": [ 00:17:10.584 { 00:17:10.584 "name": "spare", 00:17:10.584 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:10.584 "is_configured": true, 00:17:10.584 "data_offset": 2048, 00:17:10.584 "data_size": 63488 00:17:10.584 }, 00:17:10.584 { 00:17:10.584 "name": null, 00:17:10.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.584 "is_configured": false, 00:17:10.584 "data_offset": 0, 00:17:10.584 "data_size": 63488 00:17:10.584 }, 00:17:10.584 { 00:17:10.584 "name": "BaseBdev3", 00:17:10.584 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:10.584 "is_configured": true, 00:17:10.584 "data_offset": 2048, 00:17:10.584 "data_size": 63488 00:17:10.584 }, 00:17:10.584 { 00:17:10.584 "name": "BaseBdev4", 00:17:10.584 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:10.584 "is_configured": true, 00:17:10.584 "data_offset": 2048, 00:17:10.584 "data_size": 63488 00:17:10.584 } 00:17:10.584 ] 00:17:10.584 }' 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.584 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.843 
13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.843 "name": "raid_bdev1", 00:17:10.843 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:10.843 "strip_size_kb": 0, 00:17:10.843 "state": "online", 00:17:10.843 "raid_level": "raid1", 00:17:10.843 "superblock": true, 00:17:10.843 "num_base_bdevs": 4, 00:17:10.843 "num_base_bdevs_discovered": 3, 00:17:10.843 "num_base_bdevs_operational": 3, 00:17:10.843 "base_bdevs_list": [ 00:17:10.843 { 00:17:10.843 "name": "spare", 00:17:10.843 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:10.843 "is_configured": true, 00:17:10.843 "data_offset": 2048, 00:17:10.843 "data_size": 63488 00:17:10.843 }, 00:17:10.843 { 00:17:10.843 "name": null, 00:17:10.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.843 "is_configured": false, 00:17:10.843 "data_offset": 0, 00:17:10.843 "data_size": 63488 00:17:10.843 }, 00:17:10.843 { 00:17:10.843 "name": "BaseBdev3", 00:17:10.843 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:10.843 "is_configured": true, 00:17:10.843 "data_offset": 2048, 00:17:10.843 "data_size": 63488 00:17:10.843 }, 00:17:10.843 { 00:17:10.843 "name": "BaseBdev4", 00:17:10.843 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:10.843 "is_configured": true, 00:17:10.843 "data_offset": 2048, 00:17:10.843 "data_size": 63488 00:17:10.843 } 00:17:10.843 ] 00:17:10.843 }' 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.843 13:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.103 80.11 IOPS, 240.33 MiB/s 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.103 13:51:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.103 [2024-10-01 13:51:21.209104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.103 [2024-10-01 13:51:21.209140] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.103 00:17:11.103 Latency(us) 00:17:11.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.103 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:11.103 raid_bdev1 : 9.21 78.74 236.22 0.00 0.00 18267.72 315.84 112858.78 00:17:11.103 =================================================================================================================== 00:17:11.103 Total : 78.74 236.22 0.00 0.00 18267.72 315.84 112858.78 00:17:11.103 [2024-10-01 13:51:21.268764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.103 [2024-10-01 13:51:21.268825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.103 [2024-10-01 13:51:21.268933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.103 [2024-10-01 13:51:21.268945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.103 { 00:17:11.103 "results": [ 00:17:11.103 { 00:17:11.103 "job": "raid_bdev1", 00:17:11.103 "core_mask": "0x1", 00:17:11.103 "workload": "randrw", 00:17:11.103 "percentage": 50, 00:17:11.103 "status": "finished", 00:17:11.103 "queue_depth": 2, 00:17:11.103 "io_size": 3145728, 00:17:11.103 "runtime": 9.207531, 00:17:11.103 "iops": 78.73989237722903, 00:17:11.103 "mibps": 236.2196771316871, 00:17:11.103 "io_failed": 0, 00:17:11.103 "io_timeout": 0, 00:17:11.103 "avg_latency_us": 18267.71528818723, 00:17:11.103 "min_latency_us": 315.8361445783133, 00:17:11.103 "max_latency_us": 112858.78232931727 00:17:11.103 } 
00:17:11.103 ], 00:17:11.103 "core_count": 1 00:17:11.103 } 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:11.103 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.365 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.365 13:51:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:11.624 /dev/nbd0 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.624 1+0 records in 00:17:11.624 1+0 records out 00:17:11.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449098 s, 9.1 MB/s 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.624 
13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.624 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:11.885 /dev/nbd1 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.885 1+0 records in 00:17:11.885 1+0 records out 00:17:11.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477041 s, 8.6 MB/s 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.885 13:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.145 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.405 13:51:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:12.405 /dev/nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 
00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:12.405 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.664 1+0 records in 00:17:12.664 1+0 records out 00:17:12.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451361 s, 9.1 MB/s 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.664 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.923 13:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.182 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.182 [2024-10-01 13:51:23.207485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.182 [2024-10-01 13:51:23.207565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.183 [2024-10-01 13:51:23.207592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:13.183 [2024-10-01 13:51:23.207605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.183 [2024-10-01 13:51:23.210300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.183 [2024-10-01 13:51:23.210343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.183 [2024-10-01 13:51:23.210460] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.183 [2024-10-01 13:51:23.210525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.183 [2024-10-01 13:51:23.210675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.183 [2024-10-01 13:51:23.210782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.183 spare 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.183 [2024-10-01 13:51:23.310727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.183 [2024-10-01 13:51:23.310791] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.183 [2024-10-01 13:51:23.311192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:13.183 [2024-10-01 13:51:23.311396] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.183 [2024-10-01 13:51:23.311434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.183 [2024-10-01 13:51:23.311665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.183 13:51:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.183 "name": "raid_bdev1", 00:17:13.183 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:13.183 "strip_size_kb": 0, 00:17:13.183 "state": "online", 00:17:13.183 "raid_level": "raid1", 00:17:13.183 "superblock": true, 00:17:13.183 "num_base_bdevs": 4, 00:17:13.183 "num_base_bdevs_discovered": 3, 00:17:13.183 "num_base_bdevs_operational": 3, 00:17:13.183 "base_bdevs_list": [ 00:17:13.183 { 00:17:13.183 "name": "spare", 00:17:13.183 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:13.183 "is_configured": true, 00:17:13.183 "data_offset": 2048, 00:17:13.183 "data_size": 63488 00:17:13.183 }, 00:17:13.183 { 00:17:13.183 "name": null, 00:17:13.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.183 "is_configured": false, 00:17:13.183 "data_offset": 2048, 00:17:13.183 "data_size": 63488 00:17:13.183 }, 00:17:13.183 { 00:17:13.183 "name": "BaseBdev3", 00:17:13.183 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:13.183 "is_configured": true, 00:17:13.183 "data_offset": 2048, 00:17:13.183 "data_size": 63488 00:17:13.183 }, 00:17:13.183 { 00:17:13.183 "name": "BaseBdev4", 00:17:13.183 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:13.183 "is_configured": true, 00:17:13.183 "data_offset": 2048, 00:17:13.183 "data_size": 63488 00:17:13.183 } 00:17:13.183 ] 00:17:13.183 }' 00:17:13.183 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.183 13:51:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.752 "name": "raid_bdev1", 00:17:13.752 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:13.752 "strip_size_kb": 0, 00:17:13.752 "state": "online", 00:17:13.752 "raid_level": "raid1", 00:17:13.752 "superblock": true, 00:17:13.752 "num_base_bdevs": 4, 00:17:13.752 "num_base_bdevs_discovered": 3, 00:17:13.752 "num_base_bdevs_operational": 3, 00:17:13.752 "base_bdevs_list": [ 00:17:13.752 { 00:17:13.752 "name": "spare", 00:17:13.752 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:13.752 "is_configured": true, 00:17:13.752 "data_offset": 2048, 00:17:13.752 "data_size": 63488 00:17:13.752 }, 00:17:13.752 { 00:17:13.752 "name": null, 00:17:13.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:13.752 "is_configured": false, 00:17:13.752 "data_offset": 2048, 00:17:13.752 "data_size": 63488 00:17:13.752 }, 00:17:13.752 { 00:17:13.752 "name": "BaseBdev3", 00:17:13.752 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:13.752 "is_configured": true, 00:17:13.752 "data_offset": 2048, 00:17:13.752 "data_size": 63488 00:17:13.752 }, 00:17:13.752 { 00:17:13.752 "name": "BaseBdev4", 00:17:13.752 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:13.752 "is_configured": true, 00:17:13.752 "data_offset": 2048, 00:17:13.752 "data_size": 63488 00:17:13.752 } 00:17:13.752 ] 00:17:13.752 }' 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:13.752 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.011 [2024-10-01 13:51:23.943606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.011 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.011 "name": "raid_bdev1", 00:17:14.011 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:14.011 "strip_size_kb": 0, 00:17:14.011 "state": "online", 00:17:14.011 "raid_level": "raid1", 00:17:14.011 "superblock": true, 00:17:14.011 "num_base_bdevs": 4, 00:17:14.011 "num_base_bdevs_discovered": 2, 00:17:14.011 "num_base_bdevs_operational": 2, 00:17:14.012 "base_bdevs_list": [ 00:17:14.012 { 00:17:14.012 "name": null, 00:17:14.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.012 "is_configured": false, 00:17:14.012 "data_offset": 0, 00:17:14.012 "data_size": 63488 00:17:14.012 }, 00:17:14.012 { 00:17:14.012 "name": null, 00:17:14.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.012 "is_configured": false, 00:17:14.012 "data_offset": 2048, 00:17:14.012 "data_size": 63488 00:17:14.012 }, 00:17:14.012 { 00:17:14.012 "name": "BaseBdev3", 00:17:14.012 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:14.012 "is_configured": true, 00:17:14.012 "data_offset": 2048, 00:17:14.012 "data_size": 63488 00:17:14.012 }, 00:17:14.012 { 00:17:14.012 "name": "BaseBdev4", 00:17:14.012 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:14.012 "is_configured": true, 00:17:14.012 "data_offset": 2048, 00:17:14.012 "data_size": 63488 00:17:14.012 } 00:17:14.012 ] 00:17:14.012 }' 00:17:14.012 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.012 13:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 13:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.271 13:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.271 13:51:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.271 [2024-10-01 13:51:24.395639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.271 [2024-10-01 13:51:24.395854] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:14.271 [2024-10-01 13:51:24.395872] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:14.271 [2024-10-01 13:51:24.395919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.271 [2024-10-01 13:51:24.410913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:14.271 [2024-10-01 13:51:24.413263] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.271 13:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.271 13:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.671 "name": "raid_bdev1", 00:17:15.671 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:15.671 "strip_size_kb": 0, 00:17:15.671 "state": "online", 00:17:15.671 "raid_level": "raid1", 00:17:15.671 "superblock": true, 00:17:15.671 "num_base_bdevs": 4, 00:17:15.671 "num_base_bdevs_discovered": 3, 00:17:15.671 "num_base_bdevs_operational": 3, 00:17:15.671 "process": { 00:17:15.671 "type": "rebuild", 00:17:15.671 "target": "spare", 00:17:15.671 "progress": { 00:17:15.671 "blocks": 20480, 00:17:15.671 "percent": 32 00:17:15.671 } 00:17:15.671 }, 00:17:15.671 "base_bdevs_list": [ 00:17:15.671 { 00:17:15.671 "name": "spare", 00:17:15.671 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:15.671 "is_configured": true, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": null, 00:17:15.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.671 "is_configured": false, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": "BaseBdev3", 00:17:15.671 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:15.671 "is_configured": true, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": "BaseBdev4", 00:17:15.671 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:15.671 "is_configured": true, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 } 00:17:15.671 ] 00:17:15.671 }' 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.671 13:51:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.671 [2024-10-01 13:51:25.573083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.671 [2024-10-01 13:51:25.619352] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.671 [2024-10-01 13:51:25.619437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.671 [2024-10-01 13:51:25.619468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.671 [2024-10-01 13:51:25.619478] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.671 13:51:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.671 "name": "raid_bdev1", 00:17:15.671 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:15.671 "strip_size_kb": 0, 00:17:15.671 "state": "online", 00:17:15.671 "raid_level": "raid1", 00:17:15.671 "superblock": true, 00:17:15.671 "num_base_bdevs": 4, 00:17:15.671 "num_base_bdevs_discovered": 2, 00:17:15.671 "num_base_bdevs_operational": 2, 00:17:15.671 "base_bdevs_list": [ 00:17:15.671 { 00:17:15.671 "name": null, 00:17:15.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.671 "is_configured": false, 00:17:15.671 "data_offset": 0, 00:17:15.671 "data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": null, 00:17:15.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.671 "is_configured": false, 00:17:15.671 "data_offset": 2048, 00:17:15.671 
"data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": "BaseBdev3", 00:17:15.671 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:15.671 "is_configured": true, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 }, 00:17:15.671 { 00:17:15.671 "name": "BaseBdev4", 00:17:15.671 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:15.671 "is_configured": true, 00:17:15.671 "data_offset": 2048, 00:17:15.671 "data_size": 63488 00:17:15.671 } 00:17:15.671 ] 00:17:15.671 }' 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.671 13:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.930 13:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.930 13:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.930 13:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.930 [2024-10-01 13:51:26.116447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.930 [2024-10-01 13:51:26.116521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.930 [2024-10-01 13:51:26.116557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:15.930 [2024-10-01 13:51:26.116571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.930 [2024-10-01 13:51:26.117141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.930 [2024-10-01 13:51:26.117172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.930 [2024-10-01 13:51:26.117281] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.930 [2024-10-01 13:51:26.117295] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:15.930 [2024-10-01 13:51:26.117310] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:15.930 [2024-10-01 13:51:26.117333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.194 [2024-10-01 13:51:26.132447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:16.194 spare 00:17:16.194 13:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.194 [2024-10-01 13:51:26.134677] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.194 13:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.131 "name": "raid_bdev1", 00:17:17.131 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:17.131 "strip_size_kb": 0, 00:17:17.131 "state": "online", 00:17:17.131 "raid_level": "raid1", 00:17:17.131 "superblock": true, 00:17:17.131 "num_base_bdevs": 4, 00:17:17.131 "num_base_bdevs_discovered": 3, 00:17:17.131 "num_base_bdevs_operational": 3, 00:17:17.131 "process": { 00:17:17.131 "type": "rebuild", 00:17:17.131 "target": "spare", 00:17:17.131 "progress": { 00:17:17.131 "blocks": 20480, 00:17:17.131 "percent": 32 00:17:17.131 } 00:17:17.131 }, 00:17:17.131 "base_bdevs_list": [ 00:17:17.131 { 00:17:17.131 "name": "spare", 00:17:17.131 "uuid": "bdc153b3-8272-55ae-8247-d8736db7664c", 00:17:17.131 "is_configured": true, 00:17:17.131 "data_offset": 2048, 00:17:17.131 "data_size": 63488 00:17:17.131 }, 00:17:17.131 { 00:17:17.131 "name": null, 00:17:17.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.131 "is_configured": false, 00:17:17.131 "data_offset": 2048, 00:17:17.131 "data_size": 63488 00:17:17.131 }, 00:17:17.131 { 00:17:17.131 "name": "BaseBdev3", 00:17:17.131 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:17.131 "is_configured": true, 00:17:17.131 "data_offset": 2048, 00:17:17.131 "data_size": 63488 00:17:17.131 }, 00:17:17.131 { 00:17:17.131 "name": "BaseBdev4", 00:17:17.131 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:17.131 "is_configured": true, 00:17:17.131 "data_offset": 2048, 00:17:17.131 "data_size": 63488 00:17:17.131 } 00:17:17.131 ] 00:17:17.131 }' 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.131 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.131 [2024-10-01 13:51:27.274474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.390 [2024-10-01 13:51:27.340597] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.390 [2024-10-01 13:51:27.340699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.390 [2024-10-01 13:51:27.340719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.390 [2024-10-01 13:51:27.340746] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.390 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.390 "name": "raid_bdev1", 00:17:17.390 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:17.390 "strip_size_kb": 0, 00:17:17.390 "state": "online", 00:17:17.390 "raid_level": "raid1", 00:17:17.390 "superblock": true, 00:17:17.390 "num_base_bdevs": 4, 00:17:17.390 "num_base_bdevs_discovered": 2, 00:17:17.390 "num_base_bdevs_operational": 2, 00:17:17.390 "base_bdevs_list": [ 00:17:17.390 { 00:17:17.390 "name": null, 00:17:17.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.390 "is_configured": false, 00:17:17.390 "data_offset": 0, 00:17:17.390 "data_size": 63488 00:17:17.390 }, 00:17:17.390 { 00:17:17.390 "name": null, 00:17:17.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.390 "is_configured": false, 00:17:17.390 "data_offset": 2048, 00:17:17.390 "data_size": 63488 00:17:17.390 }, 00:17:17.390 { 00:17:17.390 "name": "BaseBdev3", 00:17:17.390 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:17.390 "is_configured": true, 
00:17:17.390 "data_offset": 2048, 00:17:17.390 "data_size": 63488 00:17:17.390 }, 00:17:17.390 { 00:17:17.390 "name": "BaseBdev4", 00:17:17.391 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:17.391 "is_configured": true, 00:17:17.391 "data_offset": 2048, 00:17:17.391 "data_size": 63488 00:17:17.391 } 00:17:17.391 ] 00:17:17.391 }' 00:17:17.391 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.391 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.649 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.650 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.909 "name": "raid_bdev1", 00:17:17.909 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:17.909 "strip_size_kb": 0, 00:17:17.909 "state": "online", 00:17:17.909 "raid_level": "raid1", 00:17:17.909 
"superblock": true, 00:17:17.909 "num_base_bdevs": 4, 00:17:17.909 "num_base_bdevs_discovered": 2, 00:17:17.909 "num_base_bdevs_operational": 2, 00:17:17.909 "base_bdevs_list": [ 00:17:17.909 { 00:17:17.909 "name": null, 00:17:17.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.909 "is_configured": false, 00:17:17.909 "data_offset": 0, 00:17:17.909 "data_size": 63488 00:17:17.909 }, 00:17:17.909 { 00:17:17.909 "name": null, 00:17:17.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.909 "is_configured": false, 00:17:17.909 "data_offset": 2048, 00:17:17.909 "data_size": 63488 00:17:17.909 }, 00:17:17.909 { 00:17:17.909 "name": "BaseBdev3", 00:17:17.909 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:17.909 "is_configured": true, 00:17:17.909 "data_offset": 2048, 00:17:17.909 "data_size": 63488 00:17:17.909 }, 00:17:17.909 { 00:17:17.909 "name": "BaseBdev4", 00:17:17.909 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:17.909 "is_configured": true, 00:17:17.909 "data_offset": 2048, 00:17:17.909 "data_size": 63488 00:17:17.909 } 00:17:17.909 ] 00:17:17.909 }' 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.909 [2024-10-01 13:51:27.973741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.909 [2024-10-01 13:51:27.973811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.909 [2024-10-01 13:51:27.973833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:17.909 [2024-10-01 13:51:27.973848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.909 [2024-10-01 13:51:27.974329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.909 [2024-10-01 13:51:27.974361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.909 [2024-10-01 13:51:27.974459] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.909 [2024-10-01 13:51:27.974480] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:17.909 [2024-10-01 13:51:27.974490] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.909 [2024-10-01 13:51:27.974504] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.909 BaseBdev1 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.909 13:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.842 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.842 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.842 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.842 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.842 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.843 13:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.843 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.843 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.843 "name": "raid_bdev1", 00:17:18.843 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:18.843 "strip_size_kb": 0, 00:17:18.843 "state": "online", 00:17:18.843 "raid_level": "raid1", 00:17:18.843 "superblock": true, 00:17:18.843 
"num_base_bdevs": 4, 00:17:18.843 "num_base_bdevs_discovered": 2, 00:17:18.843 "num_base_bdevs_operational": 2, 00:17:18.843 "base_bdevs_list": [ 00:17:18.843 { 00:17:18.843 "name": null, 00:17:18.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.843 "is_configured": false, 00:17:18.843 "data_offset": 0, 00:17:18.843 "data_size": 63488 00:17:18.843 }, 00:17:18.843 { 00:17:18.843 "name": null, 00:17:18.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.843 "is_configured": false, 00:17:18.843 "data_offset": 2048, 00:17:18.843 "data_size": 63488 00:17:18.843 }, 00:17:18.843 { 00:17:18.843 "name": "BaseBdev3", 00:17:18.843 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:18.843 "is_configured": true, 00:17:18.843 "data_offset": 2048, 00:17:18.843 "data_size": 63488 00:17:18.843 }, 00:17:18.843 { 00:17:18.843 "name": "BaseBdev4", 00:17:18.843 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:18.843 "is_configured": true, 00:17:18.843 "data_offset": 2048, 00:17:18.843 "data_size": 63488 00:17:18.843 } 00:17:18.843 ] 00:17:18.843 }' 00:17:18.843 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.843 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.420 13:51:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.420 "name": "raid_bdev1", 00:17:19.420 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:19.420 "strip_size_kb": 0, 00:17:19.420 "state": "online", 00:17:19.420 "raid_level": "raid1", 00:17:19.420 "superblock": true, 00:17:19.420 "num_base_bdevs": 4, 00:17:19.420 "num_base_bdevs_discovered": 2, 00:17:19.420 "num_base_bdevs_operational": 2, 00:17:19.420 "base_bdevs_list": [ 00:17:19.420 { 00:17:19.420 "name": null, 00:17:19.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.420 "is_configured": false, 00:17:19.420 "data_offset": 0, 00:17:19.420 "data_size": 63488 00:17:19.420 }, 00:17:19.420 { 00:17:19.420 "name": null, 00:17:19.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.420 "is_configured": false, 00:17:19.420 "data_offset": 2048, 00:17:19.420 "data_size": 63488 00:17:19.420 }, 00:17:19.420 { 00:17:19.420 "name": "BaseBdev3", 00:17:19.420 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:19.420 "is_configured": true, 00:17:19.420 "data_offset": 2048, 00:17:19.420 "data_size": 63488 00:17:19.420 }, 00:17:19.420 { 00:17:19.420 "name": "BaseBdev4", 00:17:19.420 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:19.420 "is_configured": true, 00:17:19.420 "data_offset": 2048, 00:17:19.420 "data_size": 63488 00:17:19.420 } 00:17:19.420 ] 00:17:19.420 }' 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.420 13:51:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 [2024-10-01 13:51:29.531743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.420 [2024-10-01 13:51:29.531943] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:19.420 [2024-10-01 13:51:29.531960] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:19.420 request: 00:17:19.420 { 00:17:19.420 "base_bdev": "BaseBdev1", 00:17:19.420 "raid_bdev": "raid_bdev1", 00:17:19.420 "method": "bdev_raid_add_base_bdev", 00:17:19.420 "req_id": 1 00:17:19.420 } 00:17:19.420 Got JSON-RPC error response 00:17:19.420 response: 00:17:19.420 { 00:17:19.420 "code": -22, 00:17:19.420 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.420 } 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.420 13:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.368 13:51:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.368 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.625 "name": "raid_bdev1", 00:17:20.625 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:20.625 "strip_size_kb": 0, 00:17:20.625 "state": "online", 00:17:20.625 "raid_level": "raid1", 00:17:20.625 "superblock": true, 00:17:20.625 "num_base_bdevs": 4, 00:17:20.625 "num_base_bdevs_discovered": 2, 00:17:20.625 "num_base_bdevs_operational": 2, 00:17:20.625 "base_bdevs_list": [ 00:17:20.625 { 00:17:20.625 "name": null, 00:17:20.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.625 "is_configured": false, 00:17:20.625 "data_offset": 0, 00:17:20.625 "data_size": 63488 00:17:20.625 }, 00:17:20.625 { 00:17:20.625 "name": null, 00:17:20.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.625 "is_configured": false, 00:17:20.625 "data_offset": 2048, 00:17:20.625 "data_size": 63488 00:17:20.625 }, 00:17:20.625 { 00:17:20.625 "name": "BaseBdev3", 00:17:20.625 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:20.625 "is_configured": true, 00:17:20.625 "data_offset": 2048, 00:17:20.625 "data_size": 63488 00:17:20.625 }, 00:17:20.625 { 00:17:20.625 "name": "BaseBdev4", 00:17:20.625 "uuid": 
"edce8862-4e1d-57a2-8695-602f70e64011", 00:17:20.625 "is_configured": true, 00:17:20.625 "data_offset": 2048, 00:17:20.625 "data_size": 63488 00:17:20.625 } 00:17:20.625 ] 00:17:20.625 }' 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.625 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.884 13:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.884 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.884 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.884 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.884 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.884 "name": "raid_bdev1", 00:17:20.884 "uuid": "2ea73cb7-06d6-4572-93ce-90dc37aa52a2", 00:17:20.884 "strip_size_kb": 0, 00:17:20.884 "state": "online", 00:17:20.884 "raid_level": "raid1", 00:17:20.884 "superblock": true, 00:17:20.884 "num_base_bdevs": 4, 00:17:20.884 "num_base_bdevs_discovered": 2, 00:17:20.884 "num_base_bdevs_operational": 2, 00:17:20.884 
"base_bdevs_list": [ 00:17:20.884 { 00:17:20.884 "name": null, 00:17:20.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.884 "is_configured": false, 00:17:20.884 "data_offset": 0, 00:17:20.884 "data_size": 63488 00:17:20.884 }, 00:17:20.884 { 00:17:20.884 "name": null, 00:17:20.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.884 "is_configured": false, 00:17:20.884 "data_offset": 2048, 00:17:20.884 "data_size": 63488 00:17:20.884 }, 00:17:20.884 { 00:17:20.884 "name": "BaseBdev3", 00:17:20.884 "uuid": "da0f7d3e-b774-515a-8e73-fc1b61f2fd9a", 00:17:20.884 "is_configured": true, 00:17:20.884 "data_offset": 2048, 00:17:20.884 "data_size": 63488 00:17:20.884 }, 00:17:20.884 { 00:17:20.884 "name": "BaseBdev4", 00:17:20.884 "uuid": "edce8862-4e1d-57a2-8695-602f70e64011", 00:17:20.884 "is_configured": true, 00:17:20.884 "data_offset": 2048, 00:17:20.884 "data_size": 63488 00:17:20.884 } 00:17:20.884 ] 00:17:20.884 }' 00:17:20.884 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79168 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79168 ']' 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79168 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79168 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.143 killing process with pid 79168 00:17:21.143 Received shutdown signal, test time was about 19.168694 seconds 00:17:21.143 00:17:21.143 Latency(us) 00:17:21.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.143 =================================================================================================================== 00:17:21.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79168' 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79168 00:17:21.143 13:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79168 00:17:21.143 [2024-10-01 13:51:31.189971] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.143 [2024-10-01 13:51:31.190096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.143 [2024-10-01 13:51:31.190169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.143 [2024-10-01 13:51:31.190186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.710 [2024-10-01 13:51:31.630605] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.084 13:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.084 00:17:23.084 real 0m22.803s 00:17:23.084 user 0m29.469s 00:17:23.084 sys 0m3.045s 00:17:23.084 13:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:17:23.084 13:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.084 ************************************ 00:17:23.084 END TEST raid_rebuild_test_sb_io 00:17:23.084 ************************************ 00:17:23.084 13:51:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:23.084 13:51:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:23.084 13:51:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:23.084 13:51:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.084 13:51:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.084 ************************************ 00:17:23.084 START TEST raid5f_state_function_test 00:17:23.084 ************************************ 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:23.084 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:23.085 Process raid pid: 79915 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@229 -- # raid_pid=79915 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79915' 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79915 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79915 ']' 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.085 13:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.085 [2024-10-01 13:51:33.223815] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:17:23.085 [2024-10-01 13:51:33.223963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.343 [2024-10-01 13:51:33.400291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.600 [2024-10-01 13:51:33.628042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.859 [2024-10-01 13:51:33.852672] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.859 [2024-10-01 13:51:33.852719] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.117 [2024-10-01 13:51:34.101718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.117 [2024-10-01 13:51:34.101790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.117 [2024-10-01 13:51:34.101807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.117 [2024-10-01 13:51:34.101822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.117 [2024-10-01 13:51:34.101830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:24.117 [2024-10-01 13:51:34.101844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.117 "name": "Existed_Raid", 00:17:24.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.117 "strip_size_kb": 64, 00:17:24.117 "state": "configuring", 00:17:24.117 "raid_level": "raid5f", 00:17:24.117 "superblock": false, 00:17:24.117 "num_base_bdevs": 3, 00:17:24.117 "num_base_bdevs_discovered": 0, 00:17:24.117 "num_base_bdevs_operational": 3, 00:17:24.117 "base_bdevs_list": [ 00:17:24.117 { 00:17:24.117 "name": "BaseBdev1", 00:17:24.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.117 "is_configured": false, 00:17:24.117 "data_offset": 0, 00:17:24.117 "data_size": 0 00:17:24.117 }, 00:17:24.117 { 00:17:24.117 "name": "BaseBdev2", 00:17:24.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.117 "is_configured": false, 00:17:24.117 "data_offset": 0, 00:17:24.117 "data_size": 0 00:17:24.117 }, 00:17:24.117 { 00:17:24.117 "name": "BaseBdev3", 00:17:24.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.117 "is_configured": false, 00:17:24.117 "data_offset": 0, 00:17:24.117 "data_size": 0 00:17:24.117 } 00:17:24.117 ] 00:17:24.117 }' 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.117 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.682 [2024-10-01 13:51:34.576983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.682 [2024-10-01 13:51:34.577507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.682 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.682 [2024-10-01 13:51:34.592996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.682 [2024-10-01 13:51:34.593182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.683 [2024-10-01 13:51:34.593204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.683 [2024-10-01 13:51:34.593219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.683 [2024-10-01 13:51:34.593227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.683 [2024-10-01 13:51:34.593240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.683 [2024-10-01 13:51:34.663565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.683 BaseBdev1 00:17:24.683 13:51:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.683 [ 00:17:24.683 { 00:17:24.683 "name": "BaseBdev1", 00:17:24.683 "aliases": [ 00:17:24.683 "cc5ea99f-e645-4c4f-9195-b0b85fbafca9" 00:17:24.683 ], 00:17:24.683 "product_name": "Malloc disk", 00:17:24.683 "block_size": 512, 00:17:24.683 "num_blocks": 65536, 00:17:24.683 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:24.683 "assigned_rate_limits": { 00:17:24.683 "rw_ios_per_sec": 0, 00:17:24.683 
"rw_mbytes_per_sec": 0, 00:17:24.683 "r_mbytes_per_sec": 0, 00:17:24.683 "w_mbytes_per_sec": 0 00:17:24.683 }, 00:17:24.683 "claimed": true, 00:17:24.683 "claim_type": "exclusive_write", 00:17:24.683 "zoned": false, 00:17:24.683 "supported_io_types": { 00:17:24.683 "read": true, 00:17:24.683 "write": true, 00:17:24.683 "unmap": true, 00:17:24.683 "flush": true, 00:17:24.683 "reset": true, 00:17:24.683 "nvme_admin": false, 00:17:24.683 "nvme_io": false, 00:17:24.683 "nvme_io_md": false, 00:17:24.683 "write_zeroes": true, 00:17:24.683 "zcopy": true, 00:17:24.683 "get_zone_info": false, 00:17:24.683 "zone_management": false, 00:17:24.683 "zone_append": false, 00:17:24.683 "compare": false, 00:17:24.683 "compare_and_write": false, 00:17:24.683 "abort": true, 00:17:24.683 "seek_hole": false, 00:17:24.683 "seek_data": false, 00:17:24.683 "copy": true, 00:17:24.683 "nvme_iov_md": false 00:17:24.683 }, 00:17:24.683 "memory_domains": [ 00:17:24.683 { 00:17:24.683 "dma_device_id": "system", 00:17:24.683 "dma_device_type": 1 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.683 "dma_device_type": 2 00:17:24.683 } 00:17:24.683 ], 00:17:24.683 "driver_specific": {} 00:17:24.683 } 00:17:24.683 ] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.683 13:51:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.683 "name": "Existed_Raid", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "strip_size_kb": 64, 00:17:24.683 "state": "configuring", 00:17:24.683 "raid_level": "raid5f", 00:17:24.683 "superblock": false, 00:17:24.683 "num_base_bdevs": 3, 00:17:24.683 "num_base_bdevs_discovered": 1, 00:17:24.683 "num_base_bdevs_operational": 3, 00:17:24.683 "base_bdevs_list": [ 00:17:24.683 { 00:17:24.683 "name": "BaseBdev1", 00:17:24.683 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:24.683 "is_configured": true, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 65536 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "name": 
"BaseBdev2", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "is_configured": false, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 0 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "name": "BaseBdev3", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "is_configured": false, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 0 00:17:24.683 } 00:17:24.683 ] 00:17:24.683 }' 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.683 13:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.251 [2024-10-01 13:51:35.199619] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.251 [2024-10-01 13:51:35.199675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.251 [2024-10-01 13:51:35.211661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.251 [2024-10-01 13:51:35.213926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:25.251 [2024-10-01 13:51:35.213977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.251 [2024-10-01 13:51:35.213989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.251 [2024-10-01 13:51:35.214002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.251 "name": "Existed_Raid", 00:17:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.251 "strip_size_kb": 64, 00:17:25.251 "state": "configuring", 00:17:25.251 "raid_level": "raid5f", 00:17:25.251 "superblock": false, 00:17:25.251 "num_base_bdevs": 3, 00:17:25.251 "num_base_bdevs_discovered": 1, 00:17:25.251 "num_base_bdevs_operational": 3, 00:17:25.251 "base_bdevs_list": [ 00:17:25.251 { 00:17:25.251 "name": "BaseBdev1", 00:17:25.251 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:25.251 "is_configured": true, 00:17:25.251 "data_offset": 0, 00:17:25.251 "data_size": 65536 00:17:25.251 }, 00:17:25.251 { 00:17:25.251 "name": "BaseBdev2", 00:17:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.251 "is_configured": false, 00:17:25.251 "data_offset": 0, 00:17:25.251 "data_size": 0 00:17:25.251 }, 00:17:25.251 { 00:17:25.251 "name": "BaseBdev3", 00:17:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.251 "is_configured": false, 00:17:25.251 "data_offset": 0, 00:17:25.251 "data_size": 0 00:17:25.251 } 00:17:25.251 ] 00:17:25.251 }' 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.251 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.511 13:51:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.511 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.511 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.774 [2024-10-01 13:51:35.721187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.774 BaseBdev2 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.774 [ 00:17:25.774 { 00:17:25.774 "name": "BaseBdev2", 00:17:25.774 "aliases": [ 00:17:25.774 "0b635fb2-18eb-4eda-98f9-20ff915276db" 00:17:25.774 ], 00:17:25.774 "product_name": "Malloc disk", 00:17:25.774 "block_size": 512, 00:17:25.774 "num_blocks": 65536, 00:17:25.774 "uuid": "0b635fb2-18eb-4eda-98f9-20ff915276db", 00:17:25.774 "assigned_rate_limits": { 00:17:25.774 "rw_ios_per_sec": 0, 00:17:25.774 "rw_mbytes_per_sec": 0, 00:17:25.774 "r_mbytes_per_sec": 0, 00:17:25.774 "w_mbytes_per_sec": 0 00:17:25.774 }, 00:17:25.774 "claimed": true, 00:17:25.774 "claim_type": "exclusive_write", 00:17:25.774 "zoned": false, 00:17:25.774 "supported_io_types": { 00:17:25.774 "read": true, 00:17:25.774 "write": true, 00:17:25.774 "unmap": true, 00:17:25.774 "flush": true, 00:17:25.774 "reset": true, 00:17:25.774 "nvme_admin": false, 00:17:25.774 "nvme_io": false, 00:17:25.774 "nvme_io_md": false, 00:17:25.774 "write_zeroes": true, 00:17:25.774 "zcopy": true, 00:17:25.774 "get_zone_info": false, 00:17:25.774 "zone_management": false, 00:17:25.774 "zone_append": false, 00:17:25.774 "compare": false, 00:17:25.774 "compare_and_write": false, 00:17:25.774 "abort": true, 00:17:25.774 "seek_hole": false, 00:17:25.774 "seek_data": false, 00:17:25.774 "copy": true, 00:17:25.774 "nvme_iov_md": false 00:17:25.774 }, 00:17:25.774 "memory_domains": [ 00:17:25.774 { 00:17:25.774 "dma_device_id": "system", 00:17:25.774 "dma_device_type": 1 00:17:25.774 }, 00:17:25.774 { 00:17:25.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.774 "dma_device_type": 2 00:17:25.774 } 00:17:25.774 ], 00:17:25.774 "driver_specific": {} 00:17:25.774 } 00:17:25.774 ] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:25.774 "name": "Existed_Raid", 00:17:25.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.774 "strip_size_kb": 64, 00:17:25.774 "state": "configuring", 00:17:25.774 "raid_level": "raid5f", 00:17:25.774 "superblock": false, 00:17:25.774 "num_base_bdevs": 3, 00:17:25.774 "num_base_bdevs_discovered": 2, 00:17:25.774 "num_base_bdevs_operational": 3, 00:17:25.774 "base_bdevs_list": [ 00:17:25.774 { 00:17:25.774 "name": "BaseBdev1", 00:17:25.774 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:25.774 "is_configured": true, 00:17:25.774 "data_offset": 0, 00:17:25.774 "data_size": 65536 00:17:25.774 }, 00:17:25.774 { 00:17:25.774 "name": "BaseBdev2", 00:17:25.774 "uuid": "0b635fb2-18eb-4eda-98f9-20ff915276db", 00:17:25.774 "is_configured": true, 00:17:25.774 "data_offset": 0, 00:17:25.774 "data_size": 65536 00:17:25.774 }, 00:17:25.774 { 00:17:25.774 "name": "BaseBdev3", 00:17:25.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.774 "is_configured": false, 00:17:25.774 "data_offset": 0, 00:17:25.774 "data_size": 0 00:17:25.774 } 00:17:25.774 ] 00:17:25.774 }' 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.774 13:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 [2024-10-01 13:51:36.197903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.034 [2024-10-01 13:51:36.197965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:26.034 [2024-10-01 13:51:36.197980] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:26.034 [2024-10-01 13:51:36.198259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:26.034 [2024-10-01 13:51:36.204918] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:26.034 [2024-10-01 13:51:36.205062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:26.034 [2024-10-01 13:51:36.205530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.034 BaseBdev3 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.034 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.293 [ 00:17:26.293 { 00:17:26.293 "name": "BaseBdev3", 00:17:26.293 "aliases": [ 00:17:26.293 "e4e3586c-c206-40c0-838b-950b62ef7ba8" 00:17:26.293 ], 00:17:26.293 "product_name": "Malloc disk", 00:17:26.293 "block_size": 512, 00:17:26.293 "num_blocks": 65536, 00:17:26.293 "uuid": "e4e3586c-c206-40c0-838b-950b62ef7ba8", 00:17:26.293 "assigned_rate_limits": { 00:17:26.293 "rw_ios_per_sec": 0, 00:17:26.293 "rw_mbytes_per_sec": 0, 00:17:26.293 "r_mbytes_per_sec": 0, 00:17:26.293 "w_mbytes_per_sec": 0 00:17:26.293 }, 00:17:26.293 "claimed": true, 00:17:26.293 "claim_type": "exclusive_write", 00:17:26.293 "zoned": false, 00:17:26.293 "supported_io_types": { 00:17:26.293 "read": true, 00:17:26.293 "write": true, 00:17:26.293 "unmap": true, 00:17:26.293 "flush": true, 00:17:26.293 "reset": true, 00:17:26.293 "nvme_admin": false, 00:17:26.293 "nvme_io": false, 00:17:26.293 "nvme_io_md": false, 00:17:26.293 "write_zeroes": true, 00:17:26.293 "zcopy": true, 00:17:26.293 "get_zone_info": false, 00:17:26.293 "zone_management": false, 00:17:26.293 "zone_append": false, 00:17:26.293 "compare": false, 00:17:26.293 "compare_and_write": false, 00:17:26.293 "abort": true, 00:17:26.293 "seek_hole": false, 00:17:26.293 "seek_data": false, 00:17:26.293 "copy": true, 00:17:26.293 "nvme_iov_md": false 00:17:26.293 }, 00:17:26.293 "memory_domains": [ 00:17:26.293 { 00:17:26.293 "dma_device_id": "system", 00:17:26.293 "dma_device_type": 1 00:17:26.293 }, 00:17:26.293 { 00:17:26.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.293 "dma_device_type": 2 00:17:26.293 } 00:17:26.293 ], 00:17:26.293 "driver_specific": {} 00:17:26.293 } 00:17:26.293 ] 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.293 13:51:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.293 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.293 "name": "Existed_Raid", 00:17:26.293 "uuid": "e5227d5e-5085-4ddb-a933-1955a1bd071c", 00:17:26.293 "strip_size_kb": 64, 00:17:26.293 "state": "online", 00:17:26.293 "raid_level": "raid5f", 00:17:26.293 "superblock": false, 00:17:26.293 "num_base_bdevs": 3, 00:17:26.293 "num_base_bdevs_discovered": 3, 00:17:26.293 "num_base_bdevs_operational": 3, 00:17:26.293 "base_bdevs_list": [ 00:17:26.293 { 00:17:26.293 "name": "BaseBdev1", 00:17:26.293 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:26.293 "is_configured": true, 00:17:26.293 "data_offset": 0, 00:17:26.293 "data_size": 65536 00:17:26.293 }, 00:17:26.293 { 00:17:26.293 "name": "BaseBdev2", 00:17:26.294 "uuid": "0b635fb2-18eb-4eda-98f9-20ff915276db", 00:17:26.294 "is_configured": true, 00:17:26.294 "data_offset": 0, 00:17:26.294 "data_size": 65536 00:17:26.294 }, 00:17:26.294 { 00:17:26.294 "name": "BaseBdev3", 00:17:26.294 "uuid": "e4e3586c-c206-40c0-838b-950b62ef7ba8", 00:17:26.294 "is_configured": true, 00:17:26.294 "data_offset": 0, 00:17:26.294 "data_size": 65536 00:17:26.294 } 00:17:26.294 ] 00:17:26.294 }' 00:17:26.294 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.294 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:26.554 13:51:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.554 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:26.554 [2024-10-01 13:51:36.728310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.823 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.823 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.823 "name": "Existed_Raid", 00:17:26.823 "aliases": [ 00:17:26.823 "e5227d5e-5085-4ddb-a933-1955a1bd071c" 00:17:26.823 ], 00:17:26.823 "product_name": "Raid Volume", 00:17:26.823 "block_size": 512, 00:17:26.823 "num_blocks": 131072, 00:17:26.823 "uuid": "e5227d5e-5085-4ddb-a933-1955a1bd071c", 00:17:26.823 "assigned_rate_limits": { 00:17:26.823 "rw_ios_per_sec": 0, 00:17:26.823 "rw_mbytes_per_sec": 0, 00:17:26.823 "r_mbytes_per_sec": 0, 00:17:26.823 "w_mbytes_per_sec": 0 00:17:26.823 }, 00:17:26.823 "claimed": false, 00:17:26.823 "zoned": false, 00:17:26.824 "supported_io_types": { 00:17:26.824 "read": true, 00:17:26.824 "write": true, 00:17:26.824 "unmap": false, 00:17:26.824 "flush": false, 00:17:26.824 "reset": true, 00:17:26.824 "nvme_admin": false, 00:17:26.824 "nvme_io": false, 00:17:26.824 "nvme_io_md": false, 00:17:26.824 "write_zeroes": true, 00:17:26.824 "zcopy": false, 00:17:26.824 "get_zone_info": false, 00:17:26.824 "zone_management": false, 00:17:26.824 "zone_append": false, 
00:17:26.824 "compare": false, 00:17:26.824 "compare_and_write": false, 00:17:26.824 "abort": false, 00:17:26.824 "seek_hole": false, 00:17:26.824 "seek_data": false, 00:17:26.824 "copy": false, 00:17:26.824 "nvme_iov_md": false 00:17:26.824 }, 00:17:26.824 "driver_specific": { 00:17:26.824 "raid": { 00:17:26.824 "uuid": "e5227d5e-5085-4ddb-a933-1955a1bd071c", 00:17:26.824 "strip_size_kb": 64, 00:17:26.824 "state": "online", 00:17:26.824 "raid_level": "raid5f", 00:17:26.824 "superblock": false, 00:17:26.824 "num_base_bdevs": 3, 00:17:26.824 "num_base_bdevs_discovered": 3, 00:17:26.824 "num_base_bdevs_operational": 3, 00:17:26.824 "base_bdevs_list": [ 00:17:26.824 { 00:17:26.824 "name": "BaseBdev1", 00:17:26.824 "uuid": "cc5ea99f-e645-4c4f-9195-b0b85fbafca9", 00:17:26.824 "is_configured": true, 00:17:26.824 "data_offset": 0, 00:17:26.824 "data_size": 65536 00:17:26.824 }, 00:17:26.824 { 00:17:26.824 "name": "BaseBdev2", 00:17:26.824 "uuid": "0b635fb2-18eb-4eda-98f9-20ff915276db", 00:17:26.824 "is_configured": true, 00:17:26.824 "data_offset": 0, 00:17:26.824 "data_size": 65536 00:17:26.824 }, 00:17:26.824 { 00:17:26.824 "name": "BaseBdev3", 00:17:26.824 "uuid": "e4e3586c-c206-40c0-838b-950b62ef7ba8", 00:17:26.824 "is_configured": true, 00:17:26.824 "data_offset": 0, 00:17:26.824 "data_size": 65536 00:17:26.824 } 00:17:26.824 ] 00:17:26.824 } 00:17:26.824 } 00:17:26.824 }' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:26.824 BaseBdev2 00:17:26.824 BaseBdev3' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.824 13:51:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:26.824 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.824 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.094 [2024-10-01 13:51:37.051685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:27.094 
13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.094 "name": "Existed_Raid", 00:17:27.094 "uuid": "e5227d5e-5085-4ddb-a933-1955a1bd071c", 00:17:27.094 "strip_size_kb": 64, 00:17:27.094 "state": 
"online", 00:17:27.094 "raid_level": "raid5f", 00:17:27.094 "superblock": false, 00:17:27.094 "num_base_bdevs": 3, 00:17:27.094 "num_base_bdevs_discovered": 2, 00:17:27.094 "num_base_bdevs_operational": 2, 00:17:27.094 "base_bdevs_list": [ 00:17:27.094 { 00:17:27.094 "name": null, 00:17:27.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.094 "is_configured": false, 00:17:27.094 "data_offset": 0, 00:17:27.094 "data_size": 65536 00:17:27.094 }, 00:17:27.094 { 00:17:27.094 "name": "BaseBdev2", 00:17:27.094 "uuid": "0b635fb2-18eb-4eda-98f9-20ff915276db", 00:17:27.094 "is_configured": true, 00:17:27.094 "data_offset": 0, 00:17:27.094 "data_size": 65536 00:17:27.094 }, 00:17:27.094 { 00:17:27.094 "name": "BaseBdev3", 00:17:27.094 "uuid": "e4e3586c-c206-40c0-838b-950b62ef7ba8", 00:17:27.094 "is_configured": true, 00:17:27.094 "data_offset": 0, 00:17:27.094 "data_size": 65536 00:17:27.094 } 00:17:27.094 ] 00:17:27.094 }' 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.094 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.660 [2024-10-01 13:51:37.623657] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.660 [2024-10-01 13:51:37.623774] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.660 [2024-10-01 13:51:37.722962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:27.660 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.661 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.661 [2024-10-01 13:51:37.774944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.661 [2024-10-01 13:51:37.775143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.920 BaseBdev2 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.920 13:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:27.920 [ 00:17:27.920 { 00:17:27.920 "name": "BaseBdev2", 00:17:27.920 "aliases": [ 00:17:27.920 "63a8a956-8e5c-474a-bb6c-25953cb9f56b" 00:17:27.920 ], 00:17:27.920 "product_name": "Malloc disk", 00:17:27.920 "block_size": 512, 00:17:27.920 "num_blocks": 65536, 00:17:27.920 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:27.920 "assigned_rate_limits": { 00:17:27.920 "rw_ios_per_sec": 0, 00:17:27.920 "rw_mbytes_per_sec": 0, 00:17:27.920 "r_mbytes_per_sec": 0, 00:17:27.920 "w_mbytes_per_sec": 0 00:17:27.920 }, 00:17:27.920 "claimed": false, 00:17:27.920 "zoned": false, 00:17:27.920 "supported_io_types": { 00:17:27.920 "read": true, 00:17:27.920 "write": true, 00:17:27.920 "unmap": true, 00:17:27.920 "flush": true, 00:17:27.920 "reset": true, 00:17:27.920 "nvme_admin": false, 00:17:27.920 "nvme_io": false, 00:17:27.920 "nvme_io_md": false, 00:17:27.920 "write_zeroes": true, 00:17:27.920 "zcopy": true, 00:17:27.920 "get_zone_info": false, 00:17:27.920 "zone_management": false, 00:17:27.920 "zone_append": false, 00:17:27.920 "compare": false, 00:17:27.920 "compare_and_write": false, 00:17:27.920 "abort": true, 00:17:27.920 "seek_hole": false, 00:17:27.920 "seek_data": false, 00:17:27.920 "copy": true, 00:17:27.920 "nvme_iov_md": false 00:17:27.920 }, 00:17:27.920 "memory_domains": [ 00:17:27.920 { 00:17:27.920 "dma_device_id": "system", 00:17:27.920 "dma_device_type": 1 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.920 "dma_device_type": 2 00:17:27.920 } 00:17:27.920 ], 00:17:27.920 "driver_specific": {} 00:17:27.920 } 00:17:27.920 ] 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.920 BaseBdev3 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.920 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.921 [ 00:17:27.921 { 00:17:27.921 "name": "BaseBdev3", 00:17:27.921 "aliases": [ 00:17:27.921 "bef7707c-f886-4939-ba90-6f0bca6f35fa" 00:17:27.921 ], 00:17:27.921 "product_name": "Malloc disk", 00:17:27.921 "block_size": 512, 00:17:27.921 "num_blocks": 65536, 00:17:27.921 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:27.921 "assigned_rate_limits": { 00:17:27.921 "rw_ios_per_sec": 0, 00:17:27.921 "rw_mbytes_per_sec": 0, 00:17:27.921 "r_mbytes_per_sec": 0, 00:17:27.921 "w_mbytes_per_sec": 0 00:17:27.921 }, 00:17:27.921 "claimed": false, 00:17:27.921 "zoned": false, 00:17:27.921 "supported_io_types": { 00:17:27.921 "read": true, 00:17:27.921 "write": true, 00:17:27.921 "unmap": true, 00:17:27.921 "flush": true, 00:17:27.921 "reset": true, 00:17:27.921 "nvme_admin": false, 00:17:27.921 "nvme_io": false, 00:17:27.921 "nvme_io_md": false, 00:17:27.921 "write_zeroes": true, 00:17:27.921 "zcopy": true, 00:17:27.921 "get_zone_info": false, 00:17:27.921 "zone_management": false, 00:17:27.921 "zone_append": false, 00:17:27.921 "compare": false, 00:17:27.921 "compare_and_write": false, 00:17:27.921 "abort": true, 00:17:27.921 "seek_hole": false, 00:17:27.921 "seek_data": false, 00:17:27.921 "copy": true, 00:17:27.921 "nvme_iov_md": false 00:17:27.921 }, 00:17:27.921 "memory_domains": [ 00:17:27.921 { 00:17:27.921 "dma_device_id": "system", 00:17:27.921 "dma_device_type": 1 00:17:27.921 }, 00:17:27.921 { 00:17:27.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.921 "dma_device_type": 2 00:17:27.921 } 00:17:27.921 ], 00:17:27.921 "driver_specific": {} 00:17:27.921 } 00:17:27.921 ] 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:27.921 13:51:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.921 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.180 [2024-10-01 13:51:38.112824] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.180 [2024-10-01 13:51:38.113433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.180 [2024-10-01 13:51:38.113590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.180 [2024-10-01 13:51:38.115921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.180 13:51:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.180 "name": "Existed_Raid", 00:17:28.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.180 "strip_size_kb": 64, 00:17:28.180 "state": "configuring", 00:17:28.180 "raid_level": "raid5f", 00:17:28.180 "superblock": false, 00:17:28.180 "num_base_bdevs": 3, 00:17:28.180 "num_base_bdevs_discovered": 2, 00:17:28.180 "num_base_bdevs_operational": 3, 00:17:28.180 "base_bdevs_list": [ 00:17:28.180 { 00:17:28.180 "name": "BaseBdev1", 00:17:28.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.180 "is_configured": false, 00:17:28.180 "data_offset": 0, 00:17:28.180 "data_size": 0 00:17:28.180 }, 00:17:28.180 { 00:17:28.180 "name": "BaseBdev2", 00:17:28.180 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:28.180 "is_configured": true, 00:17:28.180 "data_offset": 0, 00:17:28.180 "data_size": 65536 00:17:28.180 }, 00:17:28.180 { 00:17:28.180 "name": "BaseBdev3", 00:17:28.180 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:28.180 "is_configured": true, 
00:17:28.180 "data_offset": 0, 00:17:28.180 "data_size": 65536 00:17:28.180 } 00:17:28.180 ] 00:17:28.180 }' 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.180 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.439 [2024-10-01 13:51:38.524203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.439 13:51:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.439 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.439 "name": "Existed_Raid", 00:17:28.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.439 "strip_size_kb": 64, 00:17:28.439 "state": "configuring", 00:17:28.439 "raid_level": "raid5f", 00:17:28.439 "superblock": false, 00:17:28.439 "num_base_bdevs": 3, 00:17:28.439 "num_base_bdevs_discovered": 1, 00:17:28.439 "num_base_bdevs_operational": 3, 00:17:28.439 "base_bdevs_list": [ 00:17:28.439 { 00:17:28.439 "name": "BaseBdev1", 00:17:28.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.439 "is_configured": false, 00:17:28.439 "data_offset": 0, 00:17:28.439 "data_size": 0 00:17:28.439 }, 00:17:28.439 { 00:17:28.439 "name": null, 00:17:28.439 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:28.439 "is_configured": false, 00:17:28.439 "data_offset": 0, 00:17:28.440 "data_size": 65536 00:17:28.440 }, 00:17:28.440 { 00:17:28.440 "name": "BaseBdev3", 00:17:28.440 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:28.440 "is_configured": true, 00:17:28.440 "data_offset": 0, 00:17:28.440 "data_size": 65536 00:17:28.440 } 00:17:28.440 ] 00:17:28.440 }' 00:17:28.440 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.440 13:51:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.009 13:51:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.009 [2024-10-01 13:51:39.038784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.009 BaseBdev1 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.009 13:51:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.009 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.009 [ 00:17:29.009 { 00:17:29.009 "name": "BaseBdev1", 00:17:29.009 "aliases": [ 00:17:29.009 "e33a47af-bd8e-4fb4-bf86-982992086480" 00:17:29.009 ], 00:17:29.009 "product_name": "Malloc disk", 00:17:29.009 "block_size": 512, 00:17:29.009 "num_blocks": 65536, 00:17:29.009 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:29.009 "assigned_rate_limits": { 00:17:29.009 "rw_ios_per_sec": 0, 00:17:29.009 "rw_mbytes_per_sec": 0, 00:17:29.009 "r_mbytes_per_sec": 0, 00:17:29.009 "w_mbytes_per_sec": 0 00:17:29.009 }, 00:17:29.009 "claimed": true, 00:17:29.009 "claim_type": "exclusive_write", 00:17:29.009 "zoned": false, 00:17:29.009 "supported_io_types": { 00:17:29.009 "read": true, 00:17:29.009 "write": true, 00:17:29.009 "unmap": true, 00:17:29.009 "flush": true, 00:17:29.009 "reset": true, 00:17:29.009 "nvme_admin": false, 00:17:29.009 "nvme_io": false, 00:17:29.009 "nvme_io_md": false, 00:17:29.009 "write_zeroes": true, 00:17:29.009 "zcopy": true, 00:17:29.009 "get_zone_info": false, 00:17:29.009 "zone_management": false, 00:17:29.009 "zone_append": false, 00:17:29.009 
"compare": false, 00:17:29.009 "compare_and_write": false, 00:17:29.009 "abort": true, 00:17:29.010 "seek_hole": false, 00:17:29.010 "seek_data": false, 00:17:29.010 "copy": true, 00:17:29.010 "nvme_iov_md": false 00:17:29.010 }, 00:17:29.010 "memory_domains": [ 00:17:29.010 { 00:17:29.010 "dma_device_id": "system", 00:17:29.010 "dma_device_type": 1 00:17:29.010 }, 00:17:29.010 { 00:17:29.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.010 "dma_device_type": 2 00:17:29.010 } 00:17:29.010 ], 00:17:29.010 "driver_specific": {} 00:17:29.010 } 00:17:29.010 ] 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.010 13:51:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.010 "name": "Existed_Raid", 00:17:29.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.010 "strip_size_kb": 64, 00:17:29.010 "state": "configuring", 00:17:29.010 "raid_level": "raid5f", 00:17:29.010 "superblock": false, 00:17:29.010 "num_base_bdevs": 3, 00:17:29.010 "num_base_bdevs_discovered": 2, 00:17:29.010 "num_base_bdevs_operational": 3, 00:17:29.010 "base_bdevs_list": [ 00:17:29.010 { 00:17:29.010 "name": "BaseBdev1", 00:17:29.010 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:29.010 "is_configured": true, 00:17:29.010 "data_offset": 0, 00:17:29.010 "data_size": 65536 00:17:29.010 }, 00:17:29.010 { 00:17:29.010 "name": null, 00:17:29.010 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:29.010 "is_configured": false, 00:17:29.010 "data_offset": 0, 00:17:29.010 "data_size": 65536 00:17:29.010 }, 00:17:29.010 { 00:17:29.010 "name": "BaseBdev3", 00:17:29.010 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:29.010 "is_configured": true, 00:17:29.010 "data_offset": 0, 00:17:29.010 "data_size": 65536 00:17:29.010 } 00:17:29.010 ] 00:17:29.010 }' 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.010 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.576 13:51:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.576 [2024-10-01 13:51:39.554302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.576 13:51:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.576 "name": "Existed_Raid", 00:17:29.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.576 "strip_size_kb": 64, 00:17:29.576 "state": "configuring", 00:17:29.576 "raid_level": "raid5f", 00:17:29.576 "superblock": false, 00:17:29.576 "num_base_bdevs": 3, 00:17:29.576 "num_base_bdevs_discovered": 1, 00:17:29.576 "num_base_bdevs_operational": 3, 00:17:29.576 "base_bdevs_list": [ 00:17:29.576 { 00:17:29.576 "name": "BaseBdev1", 00:17:29.576 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:29.576 "is_configured": true, 00:17:29.576 "data_offset": 0, 00:17:29.576 "data_size": 65536 00:17:29.576 }, 00:17:29.576 { 00:17:29.576 "name": null, 00:17:29.576 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:29.576 "is_configured": false, 00:17:29.576 "data_offset": 0, 00:17:29.576 "data_size": 65536 00:17:29.576 }, 00:17:29.576 { 00:17:29.576 "name": null, 
00:17:29.576 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:29.576 "is_configured": false, 00:17:29.576 "data_offset": 0, 00:17:29.576 "data_size": 65536 00:17:29.576 } 00:17:29.576 ] 00:17:29.576 }' 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.576 13:51:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.838 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:29.838 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.838 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.838 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 [2024-10-01 13:51:40.081641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.096 13:51:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.096 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.096 "name": "Existed_Raid", 00:17:30.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.097 "strip_size_kb": 64, 00:17:30.097 "state": "configuring", 00:17:30.097 "raid_level": "raid5f", 00:17:30.097 "superblock": false, 00:17:30.097 "num_base_bdevs": 3, 00:17:30.097 "num_base_bdevs_discovered": 2, 00:17:30.097 "num_base_bdevs_operational": 3, 00:17:30.097 "base_bdevs_list": [ 00:17:30.097 { 
00:17:30.097 "name": "BaseBdev1", 00:17:30.097 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:30.097 "is_configured": true, 00:17:30.097 "data_offset": 0, 00:17:30.097 "data_size": 65536 00:17:30.097 }, 00:17:30.097 { 00:17:30.097 "name": null, 00:17:30.097 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:30.097 "is_configured": false, 00:17:30.097 "data_offset": 0, 00:17:30.097 "data_size": 65536 00:17:30.097 }, 00:17:30.097 { 00:17:30.097 "name": "BaseBdev3", 00:17:30.097 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:30.097 "is_configured": true, 00:17:30.097 "data_offset": 0, 00:17:30.097 "data_size": 65536 00:17:30.097 } 00:17:30.097 ] 00:17:30.097 }' 00:17:30.097 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.097 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.355 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.355 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.355 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.355 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:30.355 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.613 [2024-10-01 13:51:40.565647] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.613 "name": "Existed_Raid", 00:17:30.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.613 "strip_size_kb": 64, 00:17:30.613 "state": "configuring", 00:17:30.613 "raid_level": "raid5f", 00:17:30.613 "superblock": false, 00:17:30.613 "num_base_bdevs": 3, 00:17:30.613 "num_base_bdevs_discovered": 1, 00:17:30.613 "num_base_bdevs_operational": 3, 00:17:30.613 "base_bdevs_list": [ 00:17:30.613 { 00:17:30.613 "name": null, 00:17:30.613 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:30.613 "is_configured": false, 00:17:30.613 "data_offset": 0, 00:17:30.613 "data_size": 65536 00:17:30.613 }, 00:17:30.613 { 00:17:30.613 "name": null, 00:17:30.613 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:30.613 "is_configured": false, 00:17:30.613 "data_offset": 0, 00:17:30.613 "data_size": 65536 00:17:30.613 }, 00:17:30.613 { 00:17:30.613 "name": "BaseBdev3", 00:17:30.613 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:30.613 "is_configured": true, 00:17:30.613 "data_offset": 0, 00:17:30.613 "data_size": 65536 00:17:30.613 } 00:17:30.613 ] 00:17:30.613 }' 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.613 13:51:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.180 [2024-10-01 13:51:41.183761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.180 13:51:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.180 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.180 "name": "Existed_Raid", 00:17:31.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.180 "strip_size_kb": 64, 00:17:31.180 "state": "configuring", 00:17:31.180 "raid_level": "raid5f", 00:17:31.180 "superblock": false, 00:17:31.180 "num_base_bdevs": 3, 00:17:31.180 "num_base_bdevs_discovered": 2, 00:17:31.180 "num_base_bdevs_operational": 3, 00:17:31.180 "base_bdevs_list": [ 00:17:31.180 { 00:17:31.180 "name": null, 00:17:31.180 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:31.181 "is_configured": false, 00:17:31.181 "data_offset": 0, 00:17:31.181 "data_size": 65536 00:17:31.181 }, 00:17:31.181 { 00:17:31.181 "name": "BaseBdev2", 00:17:31.181 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:31.181 "is_configured": true, 00:17:31.181 "data_offset": 0, 00:17:31.181 "data_size": 65536 00:17:31.181 }, 00:17:31.181 { 00:17:31.181 "name": "BaseBdev3", 00:17:31.181 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:31.181 "is_configured": true, 00:17:31.181 "data_offset": 0, 00:17:31.181 "data_size": 65536 00:17:31.181 } 00:17:31.181 ] 00:17:31.181 }' 00:17:31.181 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.181 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.747 13:51:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e33a47af-bd8e-4fb4-bf86-982992086480 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 [2024-10-01 13:51:41.760330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:31.747 [2024-10-01 13:51:41.760451] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:31.747 [2024-10-01 13:51:41.760469] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:31.747 [2024-10-01 13:51:41.760837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:31.747 [2024-10-01 13:51:41.766639] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:31.747 [2024-10-01 13:51:41.766669] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:31.747 [2024-10-01 13:51:41.767035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.747 NewBaseBdev 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:31.747 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.747 13:51:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 [ 00:17:31.747 { 00:17:31.747 "name": "NewBaseBdev", 00:17:31.747 "aliases": [ 00:17:31.747 "e33a47af-bd8e-4fb4-bf86-982992086480" 00:17:31.747 ], 00:17:31.747 "product_name": "Malloc disk", 00:17:31.747 "block_size": 512, 00:17:31.747 "num_blocks": 65536, 00:17:31.747 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:31.747 "assigned_rate_limits": { 00:17:31.747 "rw_ios_per_sec": 0, 00:17:31.747 "rw_mbytes_per_sec": 0, 00:17:31.747 "r_mbytes_per_sec": 0, 00:17:31.747 "w_mbytes_per_sec": 0 00:17:31.747 }, 00:17:31.747 "claimed": true, 00:17:31.747 "claim_type": "exclusive_write", 00:17:31.747 "zoned": false, 00:17:31.747 "supported_io_types": { 00:17:31.747 "read": true, 00:17:31.747 "write": true, 00:17:31.747 "unmap": true, 00:17:31.747 "flush": true, 00:17:31.747 "reset": true, 00:17:31.747 "nvme_admin": false, 00:17:31.747 "nvme_io": false, 00:17:31.747 "nvme_io_md": false, 00:17:31.747 "write_zeroes": true, 00:17:31.747 "zcopy": true, 00:17:31.748 "get_zone_info": false, 00:17:31.748 "zone_management": false, 00:17:31.748 "zone_append": false, 00:17:31.748 "compare": false, 00:17:31.748 "compare_and_write": false, 00:17:31.748 "abort": true, 00:17:31.748 "seek_hole": false, 00:17:31.748 "seek_data": false, 00:17:31.748 "copy": true, 00:17:31.748 "nvme_iov_md": false 00:17:31.748 }, 00:17:31.748 "memory_domains": [ 00:17:31.748 { 00:17:31.748 "dma_device_id": "system", 00:17:31.748 "dma_device_type": 1 00:17:31.748 }, 00:17:31.748 { 00:17:31.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.748 "dma_device_type": 2 00:17:31.748 } 00:17:31.748 ], 00:17:31.748 "driver_specific": {} 00:17:31.748 } 00:17:31.748 ] 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:31.748 13:51:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.748 "name": "Existed_Raid", 00:17:31.748 "uuid": "82136c33-343a-4190-a4ee-16d6efef3f8f", 00:17:31.748 "strip_size_kb": 64, 00:17:31.748 "state": "online", 
00:17:31.748 "raid_level": "raid5f", 00:17:31.748 "superblock": false, 00:17:31.748 "num_base_bdevs": 3, 00:17:31.748 "num_base_bdevs_discovered": 3, 00:17:31.748 "num_base_bdevs_operational": 3, 00:17:31.748 "base_bdevs_list": [ 00:17:31.748 { 00:17:31.748 "name": "NewBaseBdev", 00:17:31.748 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:31.748 "is_configured": true, 00:17:31.748 "data_offset": 0, 00:17:31.748 "data_size": 65536 00:17:31.748 }, 00:17:31.748 { 00:17:31.748 "name": "BaseBdev2", 00:17:31.748 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:31.748 "is_configured": true, 00:17:31.748 "data_offset": 0, 00:17:31.748 "data_size": 65536 00:17:31.748 }, 00:17:31.748 { 00:17:31.748 "name": "BaseBdev3", 00:17:31.748 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:31.748 "is_configured": true, 00:17:31.748 "data_offset": 0, 00:17:31.748 "data_size": 65536 00:17:31.748 } 00:17:31.748 ] 00:17:31.748 }' 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.748 13:51:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.315 [2024-10-01 13:51:42.249915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.315 "name": "Existed_Raid", 00:17:32.315 "aliases": [ 00:17:32.315 "82136c33-343a-4190-a4ee-16d6efef3f8f" 00:17:32.315 ], 00:17:32.315 "product_name": "Raid Volume", 00:17:32.315 "block_size": 512, 00:17:32.315 "num_blocks": 131072, 00:17:32.315 "uuid": "82136c33-343a-4190-a4ee-16d6efef3f8f", 00:17:32.315 "assigned_rate_limits": { 00:17:32.315 "rw_ios_per_sec": 0, 00:17:32.315 "rw_mbytes_per_sec": 0, 00:17:32.315 "r_mbytes_per_sec": 0, 00:17:32.315 "w_mbytes_per_sec": 0 00:17:32.315 }, 00:17:32.315 "claimed": false, 00:17:32.315 "zoned": false, 00:17:32.315 "supported_io_types": { 00:17:32.315 "read": true, 00:17:32.315 "write": true, 00:17:32.315 "unmap": false, 00:17:32.315 "flush": false, 00:17:32.315 "reset": true, 00:17:32.315 "nvme_admin": false, 00:17:32.315 "nvme_io": false, 00:17:32.315 "nvme_io_md": false, 00:17:32.315 "write_zeroes": true, 00:17:32.315 "zcopy": false, 00:17:32.315 "get_zone_info": false, 00:17:32.315 "zone_management": false, 00:17:32.315 "zone_append": false, 00:17:32.315 "compare": false, 00:17:32.315 "compare_and_write": false, 00:17:32.315 "abort": false, 00:17:32.315 "seek_hole": false, 00:17:32.315 "seek_data": false, 00:17:32.315 "copy": false, 00:17:32.315 "nvme_iov_md": false 00:17:32.315 }, 00:17:32.315 "driver_specific": { 00:17:32.315 "raid": { 00:17:32.315 "uuid": "82136c33-343a-4190-a4ee-16d6efef3f8f", 
00:17:32.315 "strip_size_kb": 64, 00:17:32.315 "state": "online", 00:17:32.315 "raid_level": "raid5f", 00:17:32.315 "superblock": false, 00:17:32.315 "num_base_bdevs": 3, 00:17:32.315 "num_base_bdevs_discovered": 3, 00:17:32.315 "num_base_bdevs_operational": 3, 00:17:32.315 "base_bdevs_list": [ 00:17:32.315 { 00:17:32.315 "name": "NewBaseBdev", 00:17:32.315 "uuid": "e33a47af-bd8e-4fb4-bf86-982992086480", 00:17:32.315 "is_configured": true, 00:17:32.315 "data_offset": 0, 00:17:32.315 "data_size": 65536 00:17:32.315 }, 00:17:32.315 { 00:17:32.315 "name": "BaseBdev2", 00:17:32.315 "uuid": "63a8a956-8e5c-474a-bb6c-25953cb9f56b", 00:17:32.315 "is_configured": true, 00:17:32.315 "data_offset": 0, 00:17:32.315 "data_size": 65536 00:17:32.315 }, 00:17:32.315 { 00:17:32.315 "name": "BaseBdev3", 00:17:32.315 "uuid": "bef7707c-f886-4939-ba90-6f0bca6f35fa", 00:17:32.315 "is_configured": true, 00:17:32.315 "data_offset": 0, 00:17:32.315 "data_size": 65536 00:17:32.315 } 00:17:32.315 ] 00:17:32.315 } 00:17:32.315 } 00:17:32.315 }' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:32.315 BaseBdev2 00:17:32.315 BaseBdev3' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.315 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:32.316 
13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.316 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.573 [2024-10-01 13:51:42.517356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.573 [2024-10-01 13:51:42.517610] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.573 [2024-10-01 13:51:42.517792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.573 [2024-10-01 13:51:42.518269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.573 [2024-10-01 13:51:42.518304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79915 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79915 ']' 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 79915 
00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79915 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.573 killing process with pid 79915 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79915' 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79915 00:17:32.573 [2024-10-01 13:51:42.574381] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.573 13:51:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79915 00:17:32.831 [2024-10-01 13:51:42.927190] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:34.207 00:17:34.207 real 0m11.208s 00:17:34.207 user 0m17.554s 00:17:34.207 sys 0m2.286s 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.207 ************************************ 00:17:34.207 END TEST raid5f_state_function_test 00:17:34.207 ************************************ 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.207 13:51:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:34.207 13:51:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:34.207 
13:51:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.207 13:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.207 ************************************ 00:17:34.207 START TEST raid5f_state_function_test_sb 00:17:34.207 ************************************ 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.207 
13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:34.207 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:34.208 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80536 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:34.466 Process raid pid: 80536 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80536' 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80536 00:17:34.466 13:51:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80536 ']' 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:34.466 13:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.466 [2024-10-01 13:51:44.497964] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:34.466 [2024-10-01 13:51:44.498108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.725 [2024-10-01 13:51:44.676331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.725 [2024-10-01 13:51:44.915265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.993 [2024-10-01 13:51:45.143449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.993 [2024-10-01 13:51:45.143517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:35.265 13:51:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.265 [2024-10-01 13:51:45.392065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.265 [2024-10-01 13:51:45.392124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.265 [2024-10-01 13:51:45.392140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.265 [2024-10-01 13:51:45.392154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.265 [2024-10-01 13:51:45.392162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.265 [2024-10-01 13:51:45.392175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.265 "name": "Existed_Raid", 00:17:35.265 "uuid": "616429e4-8470-4074-bf78-300f35a34b1a", 00:17:35.265 "strip_size_kb": 64, 00:17:35.265 "state": "configuring", 00:17:35.265 "raid_level": "raid5f", 00:17:35.265 "superblock": true, 00:17:35.265 "num_base_bdevs": 3, 00:17:35.265 "num_base_bdevs_discovered": 0, 00:17:35.265 "num_base_bdevs_operational": 3, 00:17:35.265 "base_bdevs_list": [ 00:17:35.265 { 00:17:35.265 "name": "BaseBdev1", 00:17:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.265 "is_configured": false, 00:17:35.265 "data_offset": 0, 00:17:35.265 "data_size": 0 00:17:35.265 }, 00:17:35.265 { 00:17:35.265 "name": "BaseBdev2", 00:17:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.265 "is_configured": false, 00:17:35.265 
"data_offset": 0, 00:17:35.265 "data_size": 0 00:17:35.265 }, 00:17:35.265 { 00:17:35.265 "name": "BaseBdev3", 00:17:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.265 "is_configured": false, 00:17:35.265 "data_offset": 0, 00:17:35.265 "data_size": 0 00:17:35.265 } 00:17:35.265 ] 00:17:35.265 }' 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.265 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.832 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 [2024-10-01 13:51:45.811494] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.833 [2024-10-01 13:51:45.811555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 [2024-10-01 13:51:45.823536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.833 [2024-10-01 13:51:45.823589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.833 [2024-10-01 13:51:45.823600] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.833 [2024-10-01 13:51:45.823613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.833 [2024-10-01 13:51:45.823622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.833 [2024-10-01 13:51:45.823635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 [2024-10-01 13:51:45.891732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.833 BaseBdev1 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 [ 00:17:35.833 { 00:17:35.833 "name": "BaseBdev1", 00:17:35.833 "aliases": [ 00:17:35.833 "4a60f1c2-b6c7-4365-a0a9-fa5668708987" 00:17:35.833 ], 00:17:35.833 "product_name": "Malloc disk", 00:17:35.833 "block_size": 512, 00:17:35.833 "num_blocks": 65536, 00:17:35.833 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 00:17:35.833 "assigned_rate_limits": { 00:17:35.833 "rw_ios_per_sec": 0, 00:17:35.833 "rw_mbytes_per_sec": 0, 00:17:35.833 "r_mbytes_per_sec": 0, 00:17:35.833 "w_mbytes_per_sec": 0 00:17:35.833 }, 00:17:35.833 "claimed": true, 00:17:35.833 "claim_type": "exclusive_write", 00:17:35.833 "zoned": false, 00:17:35.833 "supported_io_types": { 00:17:35.833 "read": true, 00:17:35.833 "write": true, 00:17:35.833 "unmap": true, 00:17:35.833 "flush": true, 00:17:35.833 "reset": true, 00:17:35.833 "nvme_admin": false, 00:17:35.833 "nvme_io": false, 00:17:35.833 "nvme_io_md": false, 00:17:35.833 "write_zeroes": true, 00:17:35.833 "zcopy": true, 00:17:35.833 "get_zone_info": false, 00:17:35.833 "zone_management": false, 00:17:35.833 "zone_append": false, 00:17:35.833 "compare": false, 00:17:35.833 "compare_and_write": false, 00:17:35.833 "abort": true, 00:17:35.833 "seek_hole": false, 00:17:35.833 
"seek_data": false, 00:17:35.833 "copy": true, 00:17:35.833 "nvme_iov_md": false 00:17:35.833 }, 00:17:35.833 "memory_domains": [ 00:17:35.833 { 00:17:35.833 "dma_device_id": "system", 00:17:35.833 "dma_device_type": 1 00:17:35.833 }, 00:17:35.833 { 00:17:35.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.833 "dma_device_type": 2 00:17:35.833 } 00:17:35.833 ], 00:17:35.833 "driver_specific": {} 00:17:35.833 } 00:17:35.833 ] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.833 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.833 "name": "Existed_Raid", 00:17:35.833 "uuid": "ad7b7cdd-d7f1-45c0-9408-36347731a036", 00:17:35.833 "strip_size_kb": 64, 00:17:35.833 "state": "configuring", 00:17:35.833 "raid_level": "raid5f", 00:17:35.833 "superblock": true, 00:17:35.833 "num_base_bdevs": 3, 00:17:35.833 "num_base_bdevs_discovered": 1, 00:17:35.833 "num_base_bdevs_operational": 3, 00:17:35.833 "base_bdevs_list": [ 00:17:35.833 { 00:17:35.833 "name": "BaseBdev1", 00:17:35.833 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 00:17:35.833 "is_configured": true, 00:17:35.834 "data_offset": 2048, 00:17:35.834 "data_size": 63488 00:17:35.834 }, 00:17:35.834 { 00:17:35.834 "name": "BaseBdev2", 00:17:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.834 "is_configured": false, 00:17:35.834 "data_offset": 0, 00:17:35.834 "data_size": 0 00:17:35.834 }, 00:17:35.834 { 00:17:35.834 "name": "BaseBdev3", 00:17:35.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.834 "is_configured": false, 00:17:35.834 "data_offset": 0, 00:17:35.834 "data_size": 0 00:17:35.834 } 00:17:35.834 ] 00:17:35.834 }' 00:17:35.834 13:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.834 13:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.401 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.402 [2024-10-01 13:51:46.363615] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.402 [2024-10-01 13:51:46.363867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.402 [2024-10-01 13:51:46.375784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.402 [2024-10-01 13:51:46.378110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.402 [2024-10-01 13:51:46.378266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.402 [2024-10-01 13:51:46.378370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.402 [2024-10-01 13:51:46.378430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.402 "name": 
"Existed_Raid", 00:17:36.402 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:36.402 "strip_size_kb": 64, 00:17:36.402 "state": "configuring", 00:17:36.402 "raid_level": "raid5f", 00:17:36.402 "superblock": true, 00:17:36.402 "num_base_bdevs": 3, 00:17:36.402 "num_base_bdevs_discovered": 1, 00:17:36.402 "num_base_bdevs_operational": 3, 00:17:36.402 "base_bdevs_list": [ 00:17:36.402 { 00:17:36.402 "name": "BaseBdev1", 00:17:36.402 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 00:17:36.402 "is_configured": true, 00:17:36.402 "data_offset": 2048, 00:17:36.402 "data_size": 63488 00:17:36.402 }, 00:17:36.402 { 00:17:36.402 "name": "BaseBdev2", 00:17:36.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.402 "is_configured": false, 00:17:36.402 "data_offset": 0, 00:17:36.402 "data_size": 0 00:17:36.402 }, 00:17:36.402 { 00:17:36.402 "name": "BaseBdev3", 00:17:36.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.402 "is_configured": false, 00:17:36.402 "data_offset": 0, 00:17:36.402 "data_size": 0 00:17:36.402 } 00:17:36.402 ] 00:17:36.402 }' 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.402 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 [2024-10-01 13:51:46.824362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.661 BaseBdev2 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.661 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 [ 00:17:36.661 { 00:17:36.661 "name": "BaseBdev2", 00:17:36.661 "aliases": [ 00:17:36.661 "3152e869-169d-4b5b-b0a9-3f489c389069" 00:17:36.661 ], 00:17:36.919 "product_name": "Malloc disk", 00:17:36.920 "block_size": 512, 00:17:36.920 "num_blocks": 65536, 00:17:36.920 "uuid": "3152e869-169d-4b5b-b0a9-3f489c389069", 00:17:36.920 "assigned_rate_limits": { 00:17:36.920 "rw_ios_per_sec": 0, 00:17:36.920 "rw_mbytes_per_sec": 0, 00:17:36.920 "r_mbytes_per_sec": 0, 00:17:36.920 "w_mbytes_per_sec": 0 00:17:36.920 }, 00:17:36.920 "claimed": true, 
00:17:36.920 "claim_type": "exclusive_write", 00:17:36.920 "zoned": false, 00:17:36.920 "supported_io_types": { 00:17:36.920 "read": true, 00:17:36.920 "write": true, 00:17:36.920 "unmap": true, 00:17:36.920 "flush": true, 00:17:36.920 "reset": true, 00:17:36.920 "nvme_admin": false, 00:17:36.920 "nvme_io": false, 00:17:36.920 "nvme_io_md": false, 00:17:36.920 "write_zeroes": true, 00:17:36.920 "zcopy": true, 00:17:36.920 "get_zone_info": false, 00:17:36.920 "zone_management": false, 00:17:36.920 "zone_append": false, 00:17:36.920 "compare": false, 00:17:36.920 "compare_and_write": false, 00:17:36.920 "abort": true, 00:17:36.920 "seek_hole": false, 00:17:36.920 "seek_data": false, 00:17:36.920 "copy": true, 00:17:36.920 "nvme_iov_md": false 00:17:36.920 }, 00:17:36.920 "memory_domains": [ 00:17:36.920 { 00:17:36.920 "dma_device_id": "system", 00:17:36.920 "dma_device_type": 1 00:17:36.920 }, 00:17:36.920 { 00:17:36.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.920 "dma_device_type": 2 00:17:36.920 } 00:17:36.920 ], 00:17:36.920 "driver_specific": {} 00:17:36.920 } 00:17:36.920 ] 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.920 13:51:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.920 "name": "Existed_Raid", 00:17:36.920 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:36.920 "strip_size_kb": 64, 00:17:36.920 "state": "configuring", 00:17:36.920 "raid_level": "raid5f", 00:17:36.920 "superblock": true, 00:17:36.920 "num_base_bdevs": 3, 00:17:36.920 "num_base_bdevs_discovered": 2, 00:17:36.920 "num_base_bdevs_operational": 3, 00:17:36.920 "base_bdevs_list": [ 00:17:36.920 { 00:17:36.920 "name": "BaseBdev1", 00:17:36.920 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 
00:17:36.920 "is_configured": true, 00:17:36.920 "data_offset": 2048, 00:17:36.920 "data_size": 63488 00:17:36.920 }, 00:17:36.920 { 00:17:36.920 "name": "BaseBdev2", 00:17:36.920 "uuid": "3152e869-169d-4b5b-b0a9-3f489c389069", 00:17:36.920 "is_configured": true, 00:17:36.920 "data_offset": 2048, 00:17:36.920 "data_size": 63488 00:17:36.920 }, 00:17:36.920 { 00:17:36.920 "name": "BaseBdev3", 00:17:36.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.920 "is_configured": false, 00:17:36.920 "data_offset": 0, 00:17:36.920 "data_size": 0 00:17:36.920 } 00:17:36.920 ] 00:17:36.920 }' 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.920 13:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.179 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.180 [2024-10-01 13:51:47.312709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.180 [2024-10-01 13:51:47.313006] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.180 [2024-10-01 13:51:47.313030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:37.180 [2024-10-01 13:51:47.313308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:37.180 BaseBdev3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.180 [2024-10-01 13:51:47.318869] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.180 [2024-10-01 13:51:47.319009] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:37.180 [2024-10-01 13:51:47.319315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.180 [ 00:17:37.180 { 00:17:37.180 "name": "BaseBdev3", 00:17:37.180 "aliases": [ 00:17:37.180 "021a286e-5f13-48e6-8e70-aa1fbb3d93b0" 00:17:37.180 ], 00:17:37.180 "product_name": "Malloc disk", 00:17:37.180 "block_size": 512, 00:17:37.180 
"num_blocks": 65536, 00:17:37.180 "uuid": "021a286e-5f13-48e6-8e70-aa1fbb3d93b0", 00:17:37.180 "assigned_rate_limits": { 00:17:37.180 "rw_ios_per_sec": 0, 00:17:37.180 "rw_mbytes_per_sec": 0, 00:17:37.180 "r_mbytes_per_sec": 0, 00:17:37.180 "w_mbytes_per_sec": 0 00:17:37.180 }, 00:17:37.180 "claimed": true, 00:17:37.180 "claim_type": "exclusive_write", 00:17:37.180 "zoned": false, 00:17:37.180 "supported_io_types": { 00:17:37.180 "read": true, 00:17:37.180 "write": true, 00:17:37.180 "unmap": true, 00:17:37.180 "flush": true, 00:17:37.180 "reset": true, 00:17:37.180 "nvme_admin": false, 00:17:37.180 "nvme_io": false, 00:17:37.180 "nvme_io_md": false, 00:17:37.180 "write_zeroes": true, 00:17:37.180 "zcopy": true, 00:17:37.180 "get_zone_info": false, 00:17:37.180 "zone_management": false, 00:17:37.180 "zone_append": false, 00:17:37.180 "compare": false, 00:17:37.180 "compare_and_write": false, 00:17:37.180 "abort": true, 00:17:37.180 "seek_hole": false, 00:17:37.180 "seek_data": false, 00:17:37.180 "copy": true, 00:17:37.180 "nvme_iov_md": false 00:17:37.180 }, 00:17:37.180 "memory_domains": [ 00:17:37.180 { 00:17:37.180 "dma_device_id": "system", 00:17:37.180 "dma_device_type": 1 00:17:37.180 }, 00:17:37.180 { 00:17:37.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.180 "dma_device_type": 2 00:17:37.180 } 00:17:37.180 ], 00:17:37.180 "driver_specific": {} 00:17:37.180 } 00:17:37.180 ] 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.180 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.441 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.441 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.441 "name": "Existed_Raid", 00:17:37.441 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:37.441 "strip_size_kb": 64, 00:17:37.441 "state": "online", 00:17:37.441 "raid_level": "raid5f", 00:17:37.441 "superblock": true, 
00:17:37.441 "num_base_bdevs": 3, 00:17:37.441 "num_base_bdevs_discovered": 3, 00:17:37.441 "num_base_bdevs_operational": 3, 00:17:37.441 "base_bdevs_list": [ 00:17:37.441 { 00:17:37.441 "name": "BaseBdev1", 00:17:37.441 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 00:17:37.441 "is_configured": true, 00:17:37.441 "data_offset": 2048, 00:17:37.441 "data_size": 63488 00:17:37.441 }, 00:17:37.441 { 00:17:37.441 "name": "BaseBdev2", 00:17:37.441 "uuid": "3152e869-169d-4b5b-b0a9-3f489c389069", 00:17:37.441 "is_configured": true, 00:17:37.441 "data_offset": 2048, 00:17:37.441 "data_size": 63488 00:17:37.441 }, 00:17:37.441 { 00:17:37.441 "name": "BaseBdev3", 00:17:37.441 "uuid": "021a286e-5f13-48e6-8e70-aa1fbb3d93b0", 00:17:37.441 "is_configured": true, 00:17:37.441 "data_offset": 2048, 00:17:37.441 "data_size": 63488 00:17:37.441 } 00:17:37.441 ] 00:17:37.441 }' 00:17:37.441 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.441 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.708 [2024-10-01 13:51:47.781013] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.708 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:37.708 "name": "Existed_Raid", 00:17:37.708 "aliases": [ 00:17:37.708 "c36951a6-58b4-42cb-9ad4-e896acf349ce" 00:17:37.708 ], 00:17:37.708 "product_name": "Raid Volume", 00:17:37.708 "block_size": 512, 00:17:37.708 "num_blocks": 126976, 00:17:37.708 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:37.708 "assigned_rate_limits": { 00:17:37.708 "rw_ios_per_sec": 0, 00:17:37.708 "rw_mbytes_per_sec": 0, 00:17:37.708 "r_mbytes_per_sec": 0, 00:17:37.708 "w_mbytes_per_sec": 0 00:17:37.708 }, 00:17:37.708 "claimed": false, 00:17:37.708 "zoned": false, 00:17:37.708 "supported_io_types": { 00:17:37.708 "read": true, 00:17:37.708 "write": true, 00:17:37.708 "unmap": false, 00:17:37.708 "flush": false, 00:17:37.708 "reset": true, 00:17:37.708 "nvme_admin": false, 00:17:37.708 "nvme_io": false, 00:17:37.708 "nvme_io_md": false, 00:17:37.708 "write_zeroes": true, 00:17:37.708 "zcopy": false, 00:17:37.708 "get_zone_info": false, 00:17:37.708 "zone_management": false, 00:17:37.708 "zone_append": false, 00:17:37.709 "compare": false, 00:17:37.709 "compare_and_write": false, 00:17:37.709 "abort": false, 00:17:37.709 "seek_hole": false, 00:17:37.709 "seek_data": false, 00:17:37.709 "copy": false, 00:17:37.709 "nvme_iov_md": false 00:17:37.709 }, 00:17:37.709 "driver_specific": { 00:17:37.709 "raid": { 00:17:37.709 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:37.709 
"strip_size_kb": 64, 00:17:37.709 "state": "online", 00:17:37.709 "raid_level": "raid5f", 00:17:37.709 "superblock": true, 00:17:37.709 "num_base_bdevs": 3, 00:17:37.709 "num_base_bdevs_discovered": 3, 00:17:37.709 "num_base_bdevs_operational": 3, 00:17:37.709 "base_bdevs_list": [ 00:17:37.709 { 00:17:37.709 "name": "BaseBdev1", 00:17:37.709 "uuid": "4a60f1c2-b6c7-4365-a0a9-fa5668708987", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 }, 00:17:37.709 { 00:17:37.709 "name": "BaseBdev2", 00:17:37.709 "uuid": "3152e869-169d-4b5b-b0a9-3f489c389069", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 }, 00:17:37.709 { 00:17:37.709 "name": "BaseBdev3", 00:17:37.709 "uuid": "021a286e-5f13-48e6-8e70-aa1fbb3d93b0", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 } 00:17:37.709 ] 00:17:37.709 } 00:17:37.709 } 00:17:37.709 }' 00:17:37.709 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.709 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:37.709 BaseBdev2 00:17:37.709 BaseBdev3' 00:17:37.709 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.968 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:37.968 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.968 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:37.968 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.968 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.969 13:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.969 [2024-10-01 13:51:48.044512] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.969 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.228 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.228 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.228 "name": "Existed_Raid", 00:17:38.228 "uuid": "c36951a6-58b4-42cb-9ad4-e896acf349ce", 00:17:38.228 "strip_size_kb": 64, 00:17:38.228 "state": "online", 00:17:38.228 "raid_level": "raid5f", 00:17:38.228 "superblock": true, 00:17:38.228 "num_base_bdevs": 3, 00:17:38.228 "num_base_bdevs_discovered": 2, 00:17:38.228 "num_base_bdevs_operational": 2, 
00:17:38.228 "base_bdevs_list": [ 00:17:38.228 { 00:17:38.228 "name": null, 00:17:38.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.228 "is_configured": false, 00:17:38.228 "data_offset": 0, 00:17:38.228 "data_size": 63488 00:17:38.228 }, 00:17:38.228 { 00:17:38.228 "name": "BaseBdev2", 00:17:38.228 "uuid": "3152e869-169d-4b5b-b0a9-3f489c389069", 00:17:38.228 "is_configured": true, 00:17:38.228 "data_offset": 2048, 00:17:38.228 "data_size": 63488 00:17:38.228 }, 00:17:38.228 { 00:17:38.228 "name": "BaseBdev3", 00:17:38.228 "uuid": "021a286e-5f13-48e6-8e70-aa1fbb3d93b0", 00:17:38.228 "is_configured": true, 00:17:38.228 "data_offset": 2048, 00:17:38.228 "data_size": 63488 00:17:38.228 } 00:17:38.228 ] 00:17:38.228 }' 00:17:38.228 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.228 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 [2024-10-01 13:51:48.637437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.488 [2024-10-01 13:51:48.637591] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.747 [2024-10-01 13:51:48.736210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:38.747 
13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 [2024-10-01 13:51:48.792197] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.747 [2024-10-01 13:51:48.792262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:38.747 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.017 BaseBdev2 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.017 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.018 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.018 13:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.018 [ 00:17:39.018 { 
00:17:39.018 "name": "BaseBdev2", 00:17:39.018 "aliases": [ 00:17:39.018 "6478e16c-f98b-4b7b-ab96-58e163741592" 00:17:39.018 ], 00:17:39.018 "product_name": "Malloc disk", 00:17:39.018 "block_size": 512, 00:17:39.018 "num_blocks": 65536, 00:17:39.018 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:39.018 "assigned_rate_limits": { 00:17:39.018 "rw_ios_per_sec": 0, 00:17:39.018 "rw_mbytes_per_sec": 0, 00:17:39.018 "r_mbytes_per_sec": 0, 00:17:39.018 "w_mbytes_per_sec": 0 00:17:39.018 }, 00:17:39.018 "claimed": false, 00:17:39.018 "zoned": false, 00:17:39.018 "supported_io_types": { 00:17:39.018 "read": true, 00:17:39.018 "write": true, 00:17:39.018 "unmap": true, 00:17:39.018 "flush": true, 00:17:39.018 "reset": true, 00:17:39.018 "nvme_admin": false, 00:17:39.018 "nvme_io": false, 00:17:39.018 "nvme_io_md": false, 00:17:39.018 "write_zeroes": true, 00:17:39.018 "zcopy": true, 00:17:39.018 "get_zone_info": false, 00:17:39.018 "zone_management": false, 00:17:39.018 "zone_append": false, 00:17:39.018 "compare": false, 00:17:39.018 "compare_and_write": false, 00:17:39.018 "abort": true, 00:17:39.018 "seek_hole": false, 00:17:39.018 "seek_data": false, 00:17:39.018 "copy": true, 00:17:39.018 "nvme_iov_md": false 00:17:39.018 }, 00:17:39.018 "memory_domains": [ 00:17:39.018 { 00:17:39.018 "dma_device_id": "system", 00:17:39.018 "dma_device_type": 1 00:17:39.018 }, 00:17:39.018 { 00:17:39.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.018 "dma_device_type": 2 00:17:39.018 } 00:17:39.018 ], 00:17:39.018 "driver_specific": {} 00:17:39.018 } 00:17:39.018 ] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.018 BaseBdev3 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.018 13:51:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.018 [ 00:17:39.018 { 00:17:39.018 "name": "BaseBdev3", 00:17:39.018 "aliases": [ 00:17:39.018 "1a8e8c88-2e91-4934-83ba-3880a5453fd8" 00:17:39.018 ], 00:17:39.018 "product_name": "Malloc disk", 00:17:39.018 "block_size": 512, 00:17:39.018 "num_blocks": 65536, 00:17:39.018 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:39.018 "assigned_rate_limits": { 00:17:39.018 "rw_ios_per_sec": 0, 00:17:39.018 "rw_mbytes_per_sec": 0, 00:17:39.018 "r_mbytes_per_sec": 0, 00:17:39.018 "w_mbytes_per_sec": 0 00:17:39.018 }, 00:17:39.018 "claimed": false, 00:17:39.018 "zoned": false, 00:17:39.018 "supported_io_types": { 00:17:39.018 "read": true, 00:17:39.018 "write": true, 00:17:39.018 "unmap": true, 00:17:39.018 "flush": true, 00:17:39.018 "reset": true, 00:17:39.018 "nvme_admin": false, 00:17:39.018 "nvme_io": false, 00:17:39.018 "nvme_io_md": false, 00:17:39.018 "write_zeroes": true, 00:17:39.018 "zcopy": true, 00:17:39.018 "get_zone_info": false, 00:17:39.018 "zone_management": false, 00:17:39.018 "zone_append": false, 00:17:39.018 "compare": false, 00:17:39.018 "compare_and_write": false, 00:17:39.018 "abort": true, 00:17:39.018 "seek_hole": false, 00:17:39.018 "seek_data": false, 00:17:39.018 "copy": true, 00:17:39.018 "nvme_iov_md": false 00:17:39.018 }, 00:17:39.018 "memory_domains": [ 00:17:39.018 { 00:17:39.018 "dma_device_id": "system", 00:17:39.018 "dma_device_type": 1 00:17:39.018 }, 00:17:39.018 { 00:17:39.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.018 "dma_device_type": 2 00:17:39.018 } 00:17:39.018 ], 00:17:39.018 "driver_specific": {} 00:17:39.018 } 00:17:39.018 ] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.018 [2024-10-01 13:51:49.114320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.018 [2024-10-01 13:51:49.114379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.018 [2024-10-01 13:51:49.114422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.018 [2024-10-01 13:51:49.116591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.018 13:51:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.018 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.019 "name": "Existed_Raid", 00:17:39.019 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:39.019 "strip_size_kb": 64, 00:17:39.019 "state": "configuring", 00:17:39.019 "raid_level": "raid5f", 00:17:39.019 "superblock": true, 00:17:39.019 "num_base_bdevs": 3, 00:17:39.019 "num_base_bdevs_discovered": 2, 00:17:39.019 "num_base_bdevs_operational": 3, 00:17:39.019 "base_bdevs_list": [ 00:17:39.019 { 00:17:39.019 "name": "BaseBdev1", 00:17:39.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.019 "is_configured": false, 00:17:39.019 "data_offset": 0, 00:17:39.019 "data_size": 0 00:17:39.019 }, 00:17:39.019 { 00:17:39.019 "name": "BaseBdev2", 00:17:39.019 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:39.019 "is_configured": true, 00:17:39.019 "data_offset": 2048, 00:17:39.019 "data_size": 63488 00:17:39.019 }, 00:17:39.019 { 
00:17:39.019 "name": "BaseBdev3", 00:17:39.019 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:39.019 "is_configured": true, 00:17:39.019 "data_offset": 2048, 00:17:39.019 "data_size": 63488 00:17:39.019 } 00:17:39.019 ] 00:17:39.019 }' 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.019 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 [2024-10-01 13:51:49.569620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.587 "name": "Existed_Raid", 00:17:39.587 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:39.587 "strip_size_kb": 64, 00:17:39.587 "state": "configuring", 00:17:39.587 "raid_level": "raid5f", 00:17:39.587 "superblock": true, 00:17:39.587 "num_base_bdevs": 3, 00:17:39.587 "num_base_bdevs_discovered": 1, 00:17:39.587 "num_base_bdevs_operational": 3, 00:17:39.587 "base_bdevs_list": [ 00:17:39.587 { 00:17:39.587 "name": "BaseBdev1", 00:17:39.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.587 "is_configured": false, 00:17:39.587 "data_offset": 0, 00:17:39.587 "data_size": 0 00:17:39.587 }, 00:17:39.587 { 00:17:39.587 "name": null, 00:17:39.587 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:39.587 "is_configured": false, 00:17:39.587 "data_offset": 0, 00:17:39.587 "data_size": 63488 00:17:39.587 }, 00:17:39.587 { 00:17:39.587 "name": "BaseBdev3", 00:17:39.587 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:39.587 "is_configured": true, 00:17:39.587 "data_offset": 2048, 00:17:39.587 "data_size": 
63488 00:17:39.587 } 00:17:39.587 ] 00:17:39.587 }' 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.587 13:51:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 [2024-10-01 13:51:50.121843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.212 BaseBdev1 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.212 13:51:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 [ 00:17:40.212 { 00:17:40.212 "name": "BaseBdev1", 00:17:40.212 "aliases": [ 00:17:40.212 "cc5b91f4-b984-4425-98dd-ea02a451a2de" 00:17:40.212 ], 00:17:40.212 "product_name": "Malloc disk", 00:17:40.212 "block_size": 512, 00:17:40.212 "num_blocks": 65536, 00:17:40.212 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:40.212 "assigned_rate_limits": { 00:17:40.212 "rw_ios_per_sec": 0, 00:17:40.212 "rw_mbytes_per_sec": 0, 00:17:40.212 "r_mbytes_per_sec": 0, 00:17:40.212 "w_mbytes_per_sec": 0 00:17:40.212 }, 00:17:40.212 "claimed": true, 00:17:40.212 "claim_type": "exclusive_write", 00:17:40.212 "zoned": false, 00:17:40.212 "supported_io_types": { 00:17:40.212 "read": true, 00:17:40.212 "write": true, 00:17:40.212 "unmap": true, 00:17:40.212 "flush": true, 00:17:40.212 "reset": true, 00:17:40.212 "nvme_admin": false, 00:17:40.212 
"nvme_io": false, 00:17:40.212 "nvme_io_md": false, 00:17:40.212 "write_zeroes": true, 00:17:40.212 "zcopy": true, 00:17:40.212 "get_zone_info": false, 00:17:40.212 "zone_management": false, 00:17:40.212 "zone_append": false, 00:17:40.212 "compare": false, 00:17:40.212 "compare_and_write": false, 00:17:40.212 "abort": true, 00:17:40.212 "seek_hole": false, 00:17:40.212 "seek_data": false, 00:17:40.212 "copy": true, 00:17:40.212 "nvme_iov_md": false 00:17:40.212 }, 00:17:40.212 "memory_domains": [ 00:17:40.212 { 00:17:40.212 "dma_device_id": "system", 00:17:40.212 "dma_device_type": 1 00:17:40.212 }, 00:17:40.212 { 00:17:40.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.212 "dma_device_type": 2 00:17:40.212 } 00:17:40.212 ], 00:17:40.212 "driver_specific": {} 00:17:40.212 } 00:17:40.212 ] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.212 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.212 "name": "Existed_Raid", 00:17:40.212 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:40.212 "strip_size_kb": 64, 00:17:40.212 "state": "configuring", 00:17:40.212 "raid_level": "raid5f", 00:17:40.212 "superblock": true, 00:17:40.212 "num_base_bdevs": 3, 00:17:40.212 "num_base_bdevs_discovered": 2, 00:17:40.212 "num_base_bdevs_operational": 3, 00:17:40.212 "base_bdevs_list": [ 00:17:40.212 { 00:17:40.212 "name": "BaseBdev1", 00:17:40.212 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:40.212 "is_configured": true, 00:17:40.212 "data_offset": 2048, 00:17:40.212 "data_size": 63488 00:17:40.212 }, 00:17:40.213 { 00:17:40.213 "name": null, 00:17:40.213 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:40.213 "is_configured": false, 00:17:40.213 "data_offset": 0, 00:17:40.213 "data_size": 63488 00:17:40.213 }, 00:17:40.213 { 00:17:40.213 "name": "BaseBdev3", 00:17:40.213 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:40.213 "is_configured": true, 00:17:40.213 "data_offset": 2048, 00:17:40.213 "data_size": 
63488 00:17:40.213 } 00:17:40.213 ] 00:17:40.213 }' 00:17:40.213 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.213 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.472 [2024-10-01 13:51:50.625284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.472 13:51:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.472 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.732 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.732 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.732 "name": "Existed_Raid", 00:17:40.732 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:40.732 "strip_size_kb": 64, 00:17:40.732 "state": "configuring", 00:17:40.732 "raid_level": "raid5f", 00:17:40.732 "superblock": true, 00:17:40.732 "num_base_bdevs": 3, 00:17:40.732 "num_base_bdevs_discovered": 1, 00:17:40.732 "num_base_bdevs_operational": 3, 00:17:40.732 "base_bdevs_list": [ 00:17:40.732 { 00:17:40.732 "name": "BaseBdev1", 00:17:40.732 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 
00:17:40.732 "is_configured": true, 00:17:40.732 "data_offset": 2048, 00:17:40.732 "data_size": 63488 00:17:40.732 }, 00:17:40.732 { 00:17:40.732 "name": null, 00:17:40.732 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:40.732 "is_configured": false, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 63488 00:17:40.732 }, 00:17:40.732 { 00:17:40.732 "name": null, 00:17:40.732 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:40.732 "is_configured": false, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 63488 00:17:40.732 } 00:17:40.732 ] 00:17:40.732 }' 00:17:40.732 13:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.732 13:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.991 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.992 [2024-10-01 13:51:51.108636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.992 "name": "Existed_Raid", 00:17:40.992 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:40.992 "strip_size_kb": 64, 00:17:40.992 "state": "configuring", 00:17:40.992 "raid_level": "raid5f", 00:17:40.992 "superblock": true, 00:17:40.992 "num_base_bdevs": 3, 00:17:40.992 "num_base_bdevs_discovered": 2, 00:17:40.992 "num_base_bdevs_operational": 3, 00:17:40.992 "base_bdevs_list": [ 00:17:40.992 { 00:17:40.992 "name": "BaseBdev1", 00:17:40.992 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:40.992 "is_configured": true, 00:17:40.992 "data_offset": 2048, 00:17:40.992 "data_size": 63488 00:17:40.992 }, 00:17:40.992 { 00:17:40.992 "name": null, 00:17:40.992 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:40.992 "is_configured": false, 00:17:40.992 "data_offset": 0, 00:17:40.992 "data_size": 63488 00:17:40.992 }, 00:17:40.992 { 00:17:40.992 "name": "BaseBdev3", 00:17:40.992 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:40.992 "is_configured": true, 00:17:40.992 "data_offset": 2048, 00:17:40.992 "data_size": 63488 00:17:40.992 } 00:17:40.992 ] 00:17:40.992 }' 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.992 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.560 13:51:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.560 [2024-10-01 13:51:51.619990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.560 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.819 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.819 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.819 "name": "Existed_Raid", 00:17:41.819 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:41.819 "strip_size_kb": 64, 00:17:41.819 "state": "configuring", 00:17:41.819 "raid_level": "raid5f", 00:17:41.819 "superblock": true, 00:17:41.820 "num_base_bdevs": 3, 00:17:41.820 "num_base_bdevs_discovered": 1, 00:17:41.820 "num_base_bdevs_operational": 3, 00:17:41.820 "base_bdevs_list": [ 00:17:41.820 { 00:17:41.820 "name": null, 00:17:41.820 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:41.820 "is_configured": false, 00:17:41.820 "data_offset": 0, 00:17:41.820 "data_size": 63488 00:17:41.820 }, 00:17:41.820 { 00:17:41.820 "name": null, 00:17:41.820 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:41.820 "is_configured": false, 00:17:41.820 "data_offset": 0, 00:17:41.820 "data_size": 63488 00:17:41.820 }, 00:17:41.820 { 00:17:41.820 "name": "BaseBdev3", 00:17:41.820 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:41.820 "is_configured": true, 00:17:41.820 "data_offset": 2048, 00:17:41.820 "data_size": 63488 00:17:41.820 } 00:17:41.820 ] 00:17:41.820 }' 00:17:41.820 13:51:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.820 13:51:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.079 [2024-10-01 13:51:52.259900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.079 13:51:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.079 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.337 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.337 "name": "Existed_Raid", 00:17:42.337 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:42.337 "strip_size_kb": 64, 00:17:42.337 "state": "configuring", 00:17:42.337 "raid_level": "raid5f", 00:17:42.338 "superblock": true, 00:17:42.338 "num_base_bdevs": 3, 00:17:42.338 "num_base_bdevs_discovered": 2, 00:17:42.338 "num_base_bdevs_operational": 3, 00:17:42.338 "base_bdevs_list": [ 00:17:42.338 { 00:17:42.338 "name": null, 00:17:42.338 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:42.338 "is_configured": false, 00:17:42.338 "data_offset": 0, 00:17:42.338 "data_size": 63488 00:17:42.338 }, 00:17:42.338 { 00:17:42.338 "name": "BaseBdev2", 00:17:42.338 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:42.338 "is_configured": true, 00:17:42.338 "data_offset": 2048, 00:17:42.338 "data_size": 63488 00:17:42.338 }, 00:17:42.338 { 
00:17:42.338 "name": "BaseBdev3", 00:17:42.338 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:42.338 "is_configured": true, 00:17:42.338 "data_offset": 2048, 00:17:42.338 "data_size": 63488 00:17:42.338 } 00:17:42.338 ] 00:17:42.338 }' 00:17:42.338 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.338 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc5b91f4-b984-4425-98dd-ea02a451a2de 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.597 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.916 [2024-10-01 13:51:52.820203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:42.916 [2024-10-01 13:51:52.820498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:42.916 [2024-10-01 13:51:52.820519] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:42.916 [2024-10-01 13:51:52.820794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:42.916 NewBaseBdev 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.916 [2024-10-01 13:51:52.826513] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:42.916 
[2024-10-01 13:51:52.826546] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:42.916 [2024-10-01 13:51:52.826741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.916 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.916 [ 00:17:42.916 { 00:17:42.916 "name": "NewBaseBdev", 00:17:42.916 "aliases": [ 00:17:42.916 "cc5b91f4-b984-4425-98dd-ea02a451a2de" 00:17:42.916 ], 00:17:42.916 "product_name": "Malloc disk", 00:17:42.917 "block_size": 512, 00:17:42.917 "num_blocks": 65536, 00:17:42.917 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:42.917 "assigned_rate_limits": { 00:17:42.917 "rw_ios_per_sec": 0, 00:17:42.917 "rw_mbytes_per_sec": 0, 00:17:42.917 "r_mbytes_per_sec": 0, 00:17:42.917 "w_mbytes_per_sec": 0 00:17:42.917 }, 00:17:42.917 "claimed": true, 00:17:42.917 "claim_type": "exclusive_write", 00:17:42.917 "zoned": false, 00:17:42.917 "supported_io_types": { 00:17:42.917 "read": true, 00:17:42.917 "write": true, 00:17:42.917 "unmap": true, 00:17:42.917 "flush": true, 00:17:42.917 "reset": true, 00:17:42.917 "nvme_admin": false, 00:17:42.917 "nvme_io": false, 00:17:42.917 "nvme_io_md": false, 00:17:42.917 "write_zeroes": true, 00:17:42.917 "zcopy": true, 00:17:42.917 "get_zone_info": false, 00:17:42.917 "zone_management": false, 00:17:42.917 "zone_append": false, 00:17:42.917 "compare": false, 00:17:42.917 "compare_and_write": false, 00:17:42.917 "abort": true, 00:17:42.917 "seek_hole": false, 00:17:42.917 "seek_data": false, 
00:17:42.917 "copy": true, 00:17:42.917 "nvme_iov_md": false 00:17:42.917 }, 00:17:42.917 "memory_domains": [ 00:17:42.917 { 00:17:42.917 "dma_device_id": "system", 00:17:42.917 "dma_device_type": 1 00:17:42.917 }, 00:17:42.917 { 00:17:42.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.917 "dma_device_type": 2 00:17:42.917 } 00:17:42.917 ], 00:17:42.917 "driver_specific": {} 00:17:42.917 } 00:17:42.917 ] 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.917 13:51:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.917 "name": "Existed_Raid", 00:17:42.917 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:42.917 "strip_size_kb": 64, 00:17:42.917 "state": "online", 00:17:42.917 "raid_level": "raid5f", 00:17:42.917 "superblock": true, 00:17:42.917 "num_base_bdevs": 3, 00:17:42.917 "num_base_bdevs_discovered": 3, 00:17:42.917 "num_base_bdevs_operational": 3, 00:17:42.917 "base_bdevs_list": [ 00:17:42.917 { 00:17:42.917 "name": "NewBaseBdev", 00:17:42.917 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:42.917 "is_configured": true, 00:17:42.917 "data_offset": 2048, 00:17:42.917 "data_size": 63488 00:17:42.917 }, 00:17:42.917 { 00:17:42.917 "name": "BaseBdev2", 00:17:42.917 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:42.917 "is_configured": true, 00:17:42.917 "data_offset": 2048, 00:17:42.917 "data_size": 63488 00:17:42.917 }, 00:17:42.917 { 00:17:42.917 "name": "BaseBdev3", 00:17:42.917 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:42.917 "is_configured": true, 00:17:42.917 "data_offset": 2048, 00:17:42.917 "data_size": 63488 00:17:42.917 } 00:17:42.917 ] 00:17:42.917 }' 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.917 13:51:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties 
Existed_Raid 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.176 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.435 [2024-10-01 13:51:53.373114] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.435 "name": "Existed_Raid", 00:17:43.435 "aliases": [ 00:17:43.435 "edb3a6d4-2d8a-4c8b-895e-a2137c324564" 00:17:43.435 ], 00:17:43.435 "product_name": "Raid Volume", 00:17:43.435 "block_size": 512, 00:17:43.435 "num_blocks": 126976, 00:17:43.435 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:43.435 "assigned_rate_limits": { 00:17:43.435 "rw_ios_per_sec": 0, 00:17:43.435 "rw_mbytes_per_sec": 0, 00:17:43.435 "r_mbytes_per_sec": 0, 00:17:43.435 "w_mbytes_per_sec": 0 00:17:43.435 }, 00:17:43.435 "claimed": false, 00:17:43.435 "zoned": false, 00:17:43.435 "supported_io_types": { 
00:17:43.435 "read": true, 00:17:43.435 "write": true, 00:17:43.435 "unmap": false, 00:17:43.435 "flush": false, 00:17:43.435 "reset": true, 00:17:43.435 "nvme_admin": false, 00:17:43.435 "nvme_io": false, 00:17:43.435 "nvme_io_md": false, 00:17:43.435 "write_zeroes": true, 00:17:43.435 "zcopy": false, 00:17:43.435 "get_zone_info": false, 00:17:43.435 "zone_management": false, 00:17:43.435 "zone_append": false, 00:17:43.435 "compare": false, 00:17:43.435 "compare_and_write": false, 00:17:43.435 "abort": false, 00:17:43.435 "seek_hole": false, 00:17:43.435 "seek_data": false, 00:17:43.435 "copy": false, 00:17:43.435 "nvme_iov_md": false 00:17:43.435 }, 00:17:43.435 "driver_specific": { 00:17:43.435 "raid": { 00:17:43.435 "uuid": "edb3a6d4-2d8a-4c8b-895e-a2137c324564", 00:17:43.435 "strip_size_kb": 64, 00:17:43.435 "state": "online", 00:17:43.435 "raid_level": "raid5f", 00:17:43.435 "superblock": true, 00:17:43.435 "num_base_bdevs": 3, 00:17:43.435 "num_base_bdevs_discovered": 3, 00:17:43.435 "num_base_bdevs_operational": 3, 00:17:43.435 "base_bdevs_list": [ 00:17:43.435 { 00:17:43.435 "name": "NewBaseBdev", 00:17:43.435 "uuid": "cc5b91f4-b984-4425-98dd-ea02a451a2de", 00:17:43.435 "is_configured": true, 00:17:43.435 "data_offset": 2048, 00:17:43.435 "data_size": 63488 00:17:43.435 }, 00:17:43.435 { 00:17:43.435 "name": "BaseBdev2", 00:17:43.435 "uuid": "6478e16c-f98b-4b7b-ab96-58e163741592", 00:17:43.435 "is_configured": true, 00:17:43.435 "data_offset": 2048, 00:17:43.435 "data_size": 63488 00:17:43.435 }, 00:17:43.435 { 00:17:43.435 "name": "BaseBdev3", 00:17:43.435 "uuid": "1a8e8c88-2e91-4934-83ba-3880a5453fd8", 00:17:43.435 "is_configured": true, 00:17:43.435 "data_offset": 2048, 00:17:43.435 "data_size": 63488 00:17:43.435 } 00:17:43.435 ] 00:17:43.435 } 00:17:43.435 } 00:17:43.435 }' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:43.435 BaseBdev2 00:17:43.435 BaseBdev3' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.435 13:51:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.435 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.694 [2024-10-01 13:51:53.652477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.694 [2024-10-01 13:51:53.652516] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:17:43.694 [2024-10-01 13:51:53.652610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.694 [2024-10-01 13:51:53.652908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.694 [2024-10-01 13:51:53.652934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80536 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80536 ']' 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80536 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80536 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.694 killing process with pid 80536 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80536' 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80536 00:17:43.694 [2024-10-01 13:51:53.704627] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.694 13:51:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 
-- # wait 80536 00:17:43.953 [2024-10-01 13:51:54.037076] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.329 13:51:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:45.329 00:17:45.329 real 0m11.039s 00:17:45.329 user 0m17.296s 00:17:45.329 sys 0m2.299s 00:17:45.329 13:51:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.329 13:51:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.329 ************************************ 00:17:45.329 END TEST raid5f_state_function_test_sb 00:17:45.329 ************************************ 00:17:45.329 13:51:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:45.329 13:51:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:45.329 13:51:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.329 13:51:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.329 ************************************ 00:17:45.329 START TEST raid5f_superblock_test 00:17:45.329 ************************************ 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:45.329 13:51:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81162 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81162 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81162 ']' 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.329 13:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.587 [2024-10-01 13:51:55.606469] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:45.587 [2024-10-01 13:51:55.606614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81162 ] 00:17:45.846 [2024-10-01 13:51:55.783241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.846 [2024-10-01 13:51:56.014345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.105 [2024-10-01 13:51:56.241313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.105 [2024-10-01 13:51:56.241357] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.364 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 malloc1 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 [2024-10-01 13:51:56.566947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.625 [2024-10-01 13:51:56.567022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.625 [2024-10-01 13:51:56.567049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:46.625 [2024-10-01 13:51:56.567066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.625 [2024-10-01 13:51:56.569719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.625 [2024-10-01 13:51:56.569761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.625 pt1 00:17:46.625 
13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 malloc2 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 [2024-10-01 13:51:56.630380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.625 [2024-10-01 
13:51:56.630462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.625 [2024-10-01 13:51:56.630491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:46.625 [2024-10-01 13:51:56.630520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.625 [2024-10-01 13:51:56.633110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.625 [2024-10-01 13:51:56.633154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.625 pt2 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:46.625 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.626 malloc3 00:17:46.626 13:51:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.626 [2024-10-01 13:51:56.680812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:46.626 [2024-10-01 13:51:56.680870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.626 [2024-10-01 13:51:56.680897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:46.626 [2024-10-01 13:51:56.680909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.626 [2024-10-01 13:51:56.683450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.626 [2024-10-01 13:51:56.683500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:46.626 pt3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.626 [2024-10-01 13:51:56.688892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:17:46.626 [2024-10-01 13:51:56.691067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.626 [2024-10-01 13:51:56.691146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:46.626 [2024-10-01 13:51:56.691317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:46.626 [2024-10-01 13:51:56.691333] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:46.626 [2024-10-01 13:51:56.691624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:46.626 [2024-10-01 13:51:56.697979] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:46.626 [2024-10-01 13:51:56.698005] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:46.626 [2024-10-01 13:51:56.698210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.626 "name": "raid_bdev1", 00:17:46.626 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:46.626 "strip_size_kb": 64, 00:17:46.626 "state": "online", 00:17:46.626 "raid_level": "raid5f", 00:17:46.626 "superblock": true, 00:17:46.626 "num_base_bdevs": 3, 00:17:46.626 "num_base_bdevs_discovered": 3, 00:17:46.626 "num_base_bdevs_operational": 3, 00:17:46.626 "base_bdevs_list": [ 00:17:46.626 { 00:17:46.626 "name": "pt1", 00:17:46.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.626 "is_configured": true, 00:17:46.626 "data_offset": 2048, 00:17:46.626 "data_size": 63488 00:17:46.626 }, 00:17:46.626 { 00:17:46.626 "name": "pt2", 00:17:46.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.626 "is_configured": true, 00:17:46.626 "data_offset": 2048, 00:17:46.626 "data_size": 63488 00:17:46.626 }, 00:17:46.626 { 00:17:46.626 "name": "pt3", 00:17:46.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:46.626 "is_configured": true, 00:17:46.626 "data_offset": 2048, 00:17:46.626 "data_size": 63488 00:17:46.626 } 00:17:46.626 ] 
00:17:46.626 }' 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.626 13:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.194 [2024-10-01 13:51:57.144674] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.194 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.194 "name": "raid_bdev1", 00:17:47.194 "aliases": [ 00:17:47.194 "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064" 00:17:47.194 ], 00:17:47.194 "product_name": "Raid Volume", 00:17:47.194 "block_size": 512, 00:17:47.194 "num_blocks": 126976, 00:17:47.194 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:47.194 "assigned_rate_limits": { 00:17:47.194 
"rw_ios_per_sec": 0, 00:17:47.194 "rw_mbytes_per_sec": 0, 00:17:47.194 "r_mbytes_per_sec": 0, 00:17:47.194 "w_mbytes_per_sec": 0 00:17:47.194 }, 00:17:47.194 "claimed": false, 00:17:47.194 "zoned": false, 00:17:47.194 "supported_io_types": { 00:17:47.194 "read": true, 00:17:47.194 "write": true, 00:17:47.194 "unmap": false, 00:17:47.194 "flush": false, 00:17:47.194 "reset": true, 00:17:47.194 "nvme_admin": false, 00:17:47.194 "nvme_io": false, 00:17:47.194 "nvme_io_md": false, 00:17:47.194 "write_zeroes": true, 00:17:47.194 "zcopy": false, 00:17:47.194 "get_zone_info": false, 00:17:47.194 "zone_management": false, 00:17:47.194 "zone_append": false, 00:17:47.194 "compare": false, 00:17:47.194 "compare_and_write": false, 00:17:47.194 "abort": false, 00:17:47.194 "seek_hole": false, 00:17:47.194 "seek_data": false, 00:17:47.195 "copy": false, 00:17:47.195 "nvme_iov_md": false 00:17:47.195 }, 00:17:47.195 "driver_specific": { 00:17:47.195 "raid": { 00:17:47.195 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:47.195 "strip_size_kb": 64, 00:17:47.195 "state": "online", 00:17:47.195 "raid_level": "raid5f", 00:17:47.195 "superblock": true, 00:17:47.195 "num_base_bdevs": 3, 00:17:47.195 "num_base_bdevs_discovered": 3, 00:17:47.195 "num_base_bdevs_operational": 3, 00:17:47.195 "base_bdevs_list": [ 00:17:47.195 { 00:17:47.195 "name": "pt1", 00:17:47.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.195 "is_configured": true, 00:17:47.195 "data_offset": 2048, 00:17:47.195 "data_size": 63488 00:17:47.195 }, 00:17:47.195 { 00:17:47.195 "name": "pt2", 00:17:47.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.195 "is_configured": true, 00:17:47.195 "data_offset": 2048, 00:17:47.195 "data_size": 63488 00:17:47.195 }, 00:17:47.195 { 00:17:47.195 "name": "pt3", 00:17:47.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.195 "is_configured": true, 00:17:47.195 "data_offset": 2048, 00:17:47.195 "data_size": 63488 00:17:47.195 } 00:17:47.195 ] 
00:17:47.195 } 00:17:47.195 } 00:17:47.195 }' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:47.195 pt2 00:17:47.195 pt3' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.195 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 [2024-10-01 13:51:57.408267] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.454 13:51:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 ']' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 [2024-10-01 13:51:57.451998] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.454 [2024-10-01 13:51:57.452034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.454 [2024-10-01 13:51:57.452121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.454 [2024-10-01 13:51:57.452199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.454 [2024-10-01 13:51:57.452212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.454 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.455 [2024-10-01 13:51:57.591856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:47.455 [2024-10-01 
13:51:57.594163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:47.455 [2024-10-01 13:51:57.594225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:47.455 [2024-10-01 13:51:57.594285] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:47.455 [2024-10-01 13:51:57.594343] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:47.455 [2024-10-01 13:51:57.594367] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:47.455 [2024-10-01 13:51:57.594389] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.455 [2024-10-01 13:51:57.594417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:47.455 request: 00:17:47.455 { 00:17:47.455 "name": "raid_bdev1", 00:17:47.455 "raid_level": "raid5f", 00:17:47.455 "base_bdevs": [ 00:17:47.455 "malloc1", 00:17:47.455 "malloc2", 00:17:47.455 "malloc3" 00:17:47.455 ], 00:17:47.455 "strip_size_kb": 64, 00:17:47.455 "superblock": false, 00:17:47.455 "method": "bdev_raid_create", 00:17:47.455 "req_id": 1 00:17:47.455 } 00:17:47.455 Got JSON-RPC error response 00:17:47.455 response: 00:17:47.455 { 00:17:47.455 "code": -17, 00:17:47.455 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:47.455 } 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:47.455 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.713 [2024-10-01 13:51:57.651727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.713 [2024-10-01 13:51:57.651795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.713 [2024-10-01 13:51:57.651821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:47.713 [2024-10-01 13:51:57.651833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.713 [2024-10-01 13:51:57.654524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.713 [2024-10-01 13:51:57.654567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.713 [2024-10-01 13:51:57.654680] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:47.713 [2024-10-01 13:51:57.654742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.713 pt1 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.713 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.714 13:51:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.714 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.714 "name": "raid_bdev1", 00:17:47.714 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:47.714 "strip_size_kb": 64, 00:17:47.714 "state": "configuring", 00:17:47.714 "raid_level": "raid5f", 00:17:47.714 "superblock": true, 00:17:47.714 "num_base_bdevs": 3, 00:17:47.714 "num_base_bdevs_discovered": 1, 00:17:47.714 "num_base_bdevs_operational": 3, 00:17:47.714 "base_bdevs_list": [ 00:17:47.714 { 00:17:47.714 "name": "pt1", 00:17:47.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.714 "is_configured": true, 00:17:47.714 "data_offset": 2048, 00:17:47.714 "data_size": 63488 00:17:47.714 }, 00:17:47.714 { 00:17:47.714 "name": null, 00:17:47.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.714 "is_configured": false, 00:17:47.714 "data_offset": 2048, 00:17:47.714 "data_size": 63488 00:17:47.714 }, 00:17:47.714 { 00:17:47.714 "name": null, 00:17:47.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.714 "is_configured": false, 00:17:47.714 "data_offset": 2048, 00:17:47.714 "data_size": 63488 00:17:47.714 } 00:17:47.714 ] 00:17:47.714 }' 00:17:47.714 13:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.714 13:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 [2024-10-01 13:51:58.135655] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.972 [2024-10-01 13:51:58.135727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.972 [2024-10-01 13:51:58.135753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:47.972 [2024-10-01 13:51:58.135766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.972 [2024-10-01 13:51:58.136278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.972 [2024-10-01 13:51:58.136309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.972 [2024-10-01 13:51:58.136417] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:47.972 [2024-10-01 13:51:58.136445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.972 pt2 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 [2024-10-01 13:51:58.147676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.972 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.229 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.229 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.229 "name": "raid_bdev1", 00:17:48.229 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:48.229 "strip_size_kb": 64, 00:17:48.229 "state": "configuring", 00:17:48.229 "raid_level": "raid5f", 00:17:48.229 "superblock": true, 00:17:48.229 "num_base_bdevs": 3, 00:17:48.229 "num_base_bdevs_discovered": 1, 00:17:48.229 "num_base_bdevs_operational": 3, 00:17:48.229 "base_bdevs_list": [ 00:17:48.229 { 00:17:48.229 "name": "pt1", 00:17:48.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.229 "is_configured": true, 00:17:48.229 "data_offset": 2048, 00:17:48.229 "data_size": 63488 00:17:48.229 }, 00:17:48.229 { 
00:17:48.229 "name": null, 00:17:48.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.229 "is_configured": false, 00:17:48.229 "data_offset": 0, 00:17:48.229 "data_size": 63488 00:17:48.229 }, 00:17:48.229 { 00:17:48.229 "name": null, 00:17:48.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.229 "is_configured": false, 00:17:48.229 "data_offset": 2048, 00:17:48.229 "data_size": 63488 00:17:48.229 } 00:17:48.229 ] 00:17:48.229 }' 00:17:48.229 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.229 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.488 [2024-10-01 13:51:58.615633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.488 [2024-10-01 13:51:58.615710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.488 [2024-10-01 13:51:58.615732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:48.488 [2024-10-01 13:51:58.615747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.488 [2024-10-01 13:51:58.616226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.488 [2024-10-01 13:51:58.616261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.488 [2024-10-01 
13:51:58.616350] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:48.488 [2024-10-01 13:51:58.616382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.488 pt2 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.488 [2024-10-01 13:51:58.623636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:48.488 [2024-10-01 13:51:58.623693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.488 [2024-10-01 13:51:58.623712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:48.488 [2024-10-01 13:51:58.623726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.488 [2024-10-01 13:51:58.624155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.488 [2024-10-01 13:51:58.624190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:48.488 [2024-10-01 13:51:58.624261] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:48.488 [2024-10-01 13:51:58.624286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.488 [2024-10-01 13:51:58.624439] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:48.488 [2024-10-01 13:51:58.624467] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:48.488 [2024-10-01 13:51:58.624745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:48.488 [2024-10-01 13:51:58.630638] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:48.488 [2024-10-01 13:51:58.630664] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:48.488 [2024-10-01 13:51:58.630885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.488 pt3 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.488 13:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.745 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.745 "name": "raid_bdev1", 00:17:48.746 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:48.746 "strip_size_kb": 64, 00:17:48.746 "state": "online", 00:17:48.746 "raid_level": "raid5f", 00:17:48.746 "superblock": true, 00:17:48.746 "num_base_bdevs": 3, 00:17:48.746 "num_base_bdevs_discovered": 3, 00:17:48.746 "num_base_bdevs_operational": 3, 00:17:48.746 "base_bdevs_list": [ 00:17:48.746 { 00:17:48.746 "name": "pt1", 00:17:48.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.746 "is_configured": true, 00:17:48.746 "data_offset": 2048, 00:17:48.746 "data_size": 63488 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "name": "pt2", 00:17:48.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.746 "is_configured": true, 00:17:48.746 "data_offset": 2048, 00:17:48.746 "data_size": 63488 00:17:48.746 }, 00:17:48.746 { 00:17:48.746 "name": "pt3", 00:17:48.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.746 "is_configured": true, 00:17:48.746 "data_offset": 2048, 00:17:48.746 "data_size": 63488 00:17:48.746 } 00:17:48.746 ] 00:17:48.746 }' 00:17:48.746 13:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.746 13:51:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.004 [2024-10-01 13:51:59.081150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.004 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.004 "name": "raid_bdev1", 00:17:49.004 "aliases": [ 00:17:49.004 "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064" 00:17:49.004 ], 00:17:49.004 "product_name": "Raid Volume", 00:17:49.004 "block_size": 512, 00:17:49.004 "num_blocks": 126976, 00:17:49.004 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:49.004 "assigned_rate_limits": { 00:17:49.004 "rw_ios_per_sec": 0, 00:17:49.004 "rw_mbytes_per_sec": 0, 00:17:49.004 "r_mbytes_per_sec": 0, 00:17:49.004 "w_mbytes_per_sec": 0 00:17:49.004 }, 
00:17:49.004 "claimed": false, 00:17:49.004 "zoned": false, 00:17:49.004 "supported_io_types": { 00:17:49.004 "read": true, 00:17:49.004 "write": true, 00:17:49.004 "unmap": false, 00:17:49.004 "flush": false, 00:17:49.004 "reset": true, 00:17:49.004 "nvme_admin": false, 00:17:49.004 "nvme_io": false, 00:17:49.004 "nvme_io_md": false, 00:17:49.004 "write_zeroes": true, 00:17:49.004 "zcopy": false, 00:17:49.004 "get_zone_info": false, 00:17:49.004 "zone_management": false, 00:17:49.004 "zone_append": false, 00:17:49.004 "compare": false, 00:17:49.004 "compare_and_write": false, 00:17:49.004 "abort": false, 00:17:49.004 "seek_hole": false, 00:17:49.004 "seek_data": false, 00:17:49.004 "copy": false, 00:17:49.004 "nvme_iov_md": false 00:17:49.004 }, 00:17:49.004 "driver_specific": { 00:17:49.004 "raid": { 00:17:49.004 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:49.004 "strip_size_kb": 64, 00:17:49.004 "state": "online", 00:17:49.004 "raid_level": "raid5f", 00:17:49.004 "superblock": true, 00:17:49.004 "num_base_bdevs": 3, 00:17:49.004 "num_base_bdevs_discovered": 3, 00:17:49.004 "num_base_bdevs_operational": 3, 00:17:49.004 "base_bdevs_list": [ 00:17:49.004 { 00:17:49.004 "name": "pt1", 00:17:49.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.004 "is_configured": true, 00:17:49.004 "data_offset": 2048, 00:17:49.004 "data_size": 63488 00:17:49.004 }, 00:17:49.004 { 00:17:49.004 "name": "pt2", 00:17:49.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.004 "is_configured": true, 00:17:49.004 "data_offset": 2048, 00:17:49.004 "data_size": 63488 00:17:49.004 }, 00:17:49.004 { 00:17:49.004 "name": "pt3", 00:17:49.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.004 "is_configured": true, 00:17:49.004 "data_offset": 2048, 00:17:49.004 "data_size": 63488 00:17:49.004 } 00:17:49.004 ] 00:17:49.004 } 00:17:49.005 } 00:17:49.005 }' 00:17:49.005 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.005 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.005 pt2 00:17:49.005 pt3' 00:17:49.005 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.005 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:49.005 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 [2024-10-01 13:51:59.348784] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 '!=' 7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 ']' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 [2024-10-01 13:51:59.392585] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.263 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.263 "name": "raid_bdev1", 00:17:49.263 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:49.263 "strip_size_kb": 64, 00:17:49.264 "state": "online", 00:17:49.264 "raid_level": "raid5f", 00:17:49.264 "superblock": true, 00:17:49.264 "num_base_bdevs": 3, 00:17:49.264 "num_base_bdevs_discovered": 2, 00:17:49.264 "num_base_bdevs_operational": 2, 00:17:49.264 "base_bdevs_list": [ 00:17:49.264 { 00:17:49.264 "name": null, 00:17:49.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.264 "is_configured": false, 00:17:49.264 "data_offset": 0, 00:17:49.264 "data_size": 63488 00:17:49.264 }, 00:17:49.264 { 00:17:49.264 "name": "pt2", 00:17:49.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.264 "is_configured": true, 00:17:49.264 "data_offset": 2048, 00:17:49.264 "data_size": 63488 00:17:49.264 }, 00:17:49.264 { 00:17:49.264 "name": "pt3", 00:17:49.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.264 "is_configured": true, 00:17:49.264 "data_offset": 2048, 00:17:49.264 "data_size": 63488 00:17:49.264 } 00:17:49.264 ] 00:17:49.264 }' 00:17:49.264 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.264 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 
13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 [2024-10-01 13:51:59.855887] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.830 [2024-10-01 13:51:59.855920] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.830 [2024-10-01 13:51:59.856005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.830 [2024-10-01 13:51:59.856068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.830 [2024-10-01 13:51:59.856086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 [2024-10-01 13:51:59.939715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:49.830 [2024-10-01 13:51:59.939782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.830 [2024-10-01 13:51:59.939803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:49.830 [2024-10-01 13:51:59.939817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.830 [2024-10-01 13:51:59.942511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.830 [2024-10-01 13:51:59.942555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.830 [2024-10-01 13:51:59.942646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.830 [2024-10-01 13:51:59.942708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.830 pt2 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.830 "name": "raid_bdev1", 00:17:49.830 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:49.830 "strip_size_kb": 64, 00:17:49.830 "state": "configuring", 00:17:49.830 "raid_level": "raid5f", 00:17:49.830 "superblock": true, 00:17:49.830 "num_base_bdevs": 3, 00:17:49.830 "num_base_bdevs_discovered": 1, 00:17:49.831 "num_base_bdevs_operational": 2, 00:17:49.831 "base_bdevs_list": [ 00:17:49.831 { 00:17:49.831 "name": null, 00:17:49.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.831 "is_configured": false, 00:17:49.831 "data_offset": 2048, 00:17:49.831 "data_size": 63488 00:17:49.831 }, 00:17:49.831 { 00:17:49.831 "name": "pt2", 00:17:49.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.831 "is_configured": true, 00:17:49.831 "data_offset": 2048, 00:17:49.831 "data_size": 63488 00:17:49.831 }, 00:17:49.831 { 00:17:49.831 "name": null, 00:17:49.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.831 "is_configured": false, 00:17:49.831 "data_offset": 2048, 00:17:49.831 "data_size": 63488 00:17:49.831 } 00:17:49.831 ] 00:17:49.831 }' 00:17:49.831 13:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.831 13:51:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.398 [2024-10-01 13:52:00.407624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:50.398 [2024-10-01 13:52:00.407698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.398 [2024-10-01 13:52:00.407742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:50.398 [2024-10-01 13:52:00.407758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.398 [2024-10-01 13:52:00.408276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.398 [2024-10-01 13:52:00.408308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:50.398 [2024-10-01 13:52:00.408423] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:50.398 [2024-10-01 13:52:00.408468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:50.398 [2024-10-01 13:52:00.408597] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:50.398 [2024-10-01 13:52:00.408617] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:50.398 [2024-10-01 
13:52:00.408875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:50.398 [2024-10-01 13:52:00.414597] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:50.398 [2024-10-01 13:52:00.414625] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:50.398 [2024-10-01 13:52:00.414953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.398 pt3 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.398 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.399 13:52:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.399 "name": "raid_bdev1", 00:17:50.399 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:50.399 "strip_size_kb": 64, 00:17:50.399 "state": "online", 00:17:50.399 "raid_level": "raid5f", 00:17:50.399 "superblock": true, 00:17:50.399 "num_base_bdevs": 3, 00:17:50.399 "num_base_bdevs_discovered": 2, 00:17:50.399 "num_base_bdevs_operational": 2, 00:17:50.399 "base_bdevs_list": [ 00:17:50.399 { 00:17:50.399 "name": null, 00:17:50.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.399 "is_configured": false, 00:17:50.399 "data_offset": 2048, 00:17:50.399 "data_size": 63488 00:17:50.399 }, 00:17:50.399 { 00:17:50.399 "name": "pt2", 00:17:50.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.399 "is_configured": true, 00:17:50.399 "data_offset": 2048, 00:17:50.399 "data_size": 63488 00:17:50.399 }, 00:17:50.399 { 00:17:50.399 "name": "pt3", 00:17:50.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.399 "is_configured": true, 00:17:50.399 "data_offset": 2048, 00:17:50.399 "data_size": 63488 00:17:50.399 } 00:17:50.399 ] 00:17:50.399 }' 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.399 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.967 [2024-10-01 13:52:00.869861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.967 [2024-10-01 13:52:00.869903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.967 [2024-10-01 13:52:00.870014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.967 [2024-10-01 13:52:00.870082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.967 [2024-10-01 13:52:00.870095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.967 13:52:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.967 [2024-10-01 13:52:00.941807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.967 [2024-10-01 13:52:00.941887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.967 [2024-10-01 13:52:00.941915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:50.967 [2024-10-01 13:52:00.941928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.967 [2024-10-01 13:52:00.944904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.967 [2024-10-01 13:52:00.944953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.967 [2024-10-01 13:52:00.945085] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:50.967 [2024-10-01 13:52:00.945140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.967 [2024-10-01 13:52:00.945297] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:50.967 [2024-10-01 13:52:00.945314] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.967 [2024-10-01 13:52:00.945337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:50.967 
[2024-10-01 13:52:00.945442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.967 pt1 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.967 13:52:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.967 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.967 "name": "raid_bdev1", 00:17:50.967 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:50.967 "strip_size_kb": 64, 00:17:50.967 "state": "configuring", 00:17:50.967 "raid_level": "raid5f", 00:17:50.967 "superblock": true, 00:17:50.967 "num_base_bdevs": 3, 00:17:50.967 "num_base_bdevs_discovered": 1, 00:17:50.967 "num_base_bdevs_operational": 2, 00:17:50.967 "base_bdevs_list": [ 00:17:50.967 { 00:17:50.967 "name": null, 00:17:50.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.967 "is_configured": false, 00:17:50.967 "data_offset": 2048, 00:17:50.967 "data_size": 63488 00:17:50.967 }, 00:17:50.967 { 00:17:50.967 "name": "pt2", 00:17:50.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.968 "is_configured": true, 00:17:50.968 "data_offset": 2048, 00:17:50.968 "data_size": 63488 00:17:50.968 }, 00:17:50.968 { 00:17:50.968 "name": null, 00:17:50.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.968 "is_configured": false, 00:17:50.968 "data_offset": 2048, 00:17:50.968 "data_size": 63488 00:17:50.968 } 00:17:50.968 ] 00:17:50.968 }' 00:17:50.968 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.968 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.226 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:51.226 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:51.226 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.226 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 [2024-10-01 13:52:01.461139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.485 [2024-10-01 13:52:01.461222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.485 [2024-10-01 13:52:01.461250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:51.485 [2024-10-01 13:52:01.461264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.485 [2024-10-01 13:52:01.461868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.485 [2024-10-01 13:52:01.461901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.485 [2024-10-01 13:52:01.462002] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:51.485 [2024-10-01 13:52:01.462038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.485 [2024-10-01 13:52:01.462181] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:51.485 [2024-10-01 13:52:01.462200] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:51.485 [2024-10-01 13:52:01.462534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:51.485 [2024-10-01 13:52:01.469391] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:51.485 [2024-10-01 
13:52:01.469438] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:51.485 [2024-10-01 13:52:01.469768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.485 pt3 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.485 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.486 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.486 13:52:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.486 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.486 "name": "raid_bdev1", 00:17:51.486 "uuid": "7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064", 00:17:51.486 "strip_size_kb": 64, 00:17:51.486 "state": "online", 00:17:51.486 "raid_level": "raid5f", 00:17:51.486 "superblock": true, 00:17:51.486 "num_base_bdevs": 3, 00:17:51.486 "num_base_bdevs_discovered": 2, 00:17:51.486 "num_base_bdevs_operational": 2, 00:17:51.486 "base_bdevs_list": [ 00:17:51.486 { 00:17:51.486 "name": null, 00:17:51.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.486 "is_configured": false, 00:17:51.486 "data_offset": 2048, 00:17:51.486 "data_size": 63488 00:17:51.486 }, 00:17:51.486 { 00:17:51.486 "name": "pt2", 00:17:51.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.486 "is_configured": true, 00:17:51.486 "data_offset": 2048, 00:17:51.486 "data_size": 63488 00:17:51.486 }, 00:17:51.486 { 00:17:51.486 "name": "pt3", 00:17:51.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.486 "is_configured": true, 00:17:51.486 "data_offset": 2048, 00:17:51.486 "data_size": 63488 00:17:51.486 } 00:17:51.486 ] 00:17:51.486 }' 00:17:51.486 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.486 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.745 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:51.745 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:51.745 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.745 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.005 13:52:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.005 [2024-10-01 13:52:01.976947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 '!=' 7255a0d1-ffc4-46f0-b3ed-acc5ad0bb064 ']' 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81162 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81162 ']' 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81162 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81162 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.005 killing process with pid 81162 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 81162' 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81162 00:17:52.005 [2024-10-01 13:52:02.050658] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.005 13:52:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81162 00:17:52.005 [2024-10-01 13:52:02.050782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.005 [2024-10-01 13:52:02.050852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.005 [2024-10-01 13:52:02.050868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:52.263 [2024-10-01 13:52:02.366769] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.638 13:52:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:53.638 00:17:53.638 real 0m8.188s 00:17:53.638 user 0m12.726s 00:17:53.638 sys 0m1.678s 00:17:53.638 13:52:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.638 13:52:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.638 ************************************ 00:17:53.638 END TEST raid5f_superblock_test 00:17:53.638 ************************************ 00:17:53.638 13:52:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:53.638 13:52:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:53.638 13:52:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:53.638 13:52:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.638 13:52:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.638 ************************************ 00:17:53.638 START TEST 
raid5f_rebuild_test 00:17:53.638 ************************************ 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:53.638 13:52:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:53.638 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81606 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81606 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81606 ']' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.639 13:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:53.897 Zero copy mechanism will not be used. 00:17:53.897 [2024-10-01 13:52:03.875193] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:17:53.897 [2024-10-01 13:52:03.875322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81606 ] 00:17:53.897 [2024-10-01 13:52:04.048755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.156 [2024-10-01 13:52:04.275491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.415 [2024-10-01 13:52:04.490626] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.415 [2024-10-01 13:52:04.490666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.674 BaseBdev1_malloc 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.674 [2024-10-01 13:52:04.789551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:54.674 [2024-10-01 13:52:04.789637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.674 [2024-10-01 13:52:04.789675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.674 [2024-10-01 13:52:04.789693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.674 [2024-10-01 13:52:04.792117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.674 [2024-10-01 13:52:04.792163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:54.674 BaseBdev1 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.674 BaseBdev2_malloc 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.674 13:52:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.674 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.674 [2024-10-01 13:52:04.862387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:54.674 [2024-10-01 13:52:04.862476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.674 [2024-10-01 13:52:04.862500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.674 [2024-10-01 13:52:04.862514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.674 [2024-10-01 13:52:04.865034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.674 [2024-10-01 13:52:04.865079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:54.958 BaseBdev2 00:17:54.958 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.958 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.958 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:54.958 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 BaseBdev3_malloc 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 [2024-10-01 13:52:04.920188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:54.959 [2024-10-01 13:52:04.920260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.959 [2024-10-01 13:52:04.920302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:54.959 [2024-10-01 13:52:04.920318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.959 [2024-10-01 13:52:04.922745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.959 [2024-10-01 13:52:04.922792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:54.959 BaseBdev3 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 spare_malloc 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 spare_delay 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 [2024-10-01 13:52:04.988610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.959 [2024-10-01 13:52:04.988680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.959 [2024-10-01 13:52:04.988704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:54.959 [2024-10-01 13:52:04.988719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.959 [2024-10-01 13:52:04.991157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.959 [2024-10-01 13:52:04.991208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.959 spare 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 [2024-10-01 13:52:05.000666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.959 [2024-10-01 13:52:05.002907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.959 [2024-10-01 13:52:05.002982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.959 [2024-10-01 13:52:05.003085] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:17:54.959 [2024-10-01 13:52:05.003097] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:54.959 [2024-10-01 13:52:05.003433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:54.959 [2024-10-01 13:52:05.009237] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:54.959 [2024-10-01 13:52:05.009266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:54.959 [2024-10-01 13:52:05.009529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.959 
13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.959 "name": "raid_bdev1", 00:17:54.959 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:54.959 "strip_size_kb": 64, 00:17:54.959 "state": "online", 00:17:54.959 "raid_level": "raid5f", 00:17:54.959 "superblock": false, 00:17:54.959 "num_base_bdevs": 3, 00:17:54.959 "num_base_bdevs_discovered": 3, 00:17:54.959 "num_base_bdevs_operational": 3, 00:17:54.959 "base_bdevs_list": [ 00:17:54.959 { 00:17:54.959 "name": "BaseBdev1", 00:17:54.959 "uuid": "93a57568-827a-5ddf-ae15-507c908fb8d5", 00:17:54.959 "is_configured": true, 00:17:54.959 "data_offset": 0, 00:17:54.959 "data_size": 65536 00:17:54.959 }, 00:17:54.959 { 00:17:54.959 "name": "BaseBdev2", 00:17:54.959 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:54.959 "is_configured": true, 00:17:54.959 "data_offset": 0, 00:17:54.959 "data_size": 65536 00:17:54.959 }, 00:17:54.959 { 00:17:54.959 "name": "BaseBdev3", 00:17:54.959 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:54.959 "is_configured": true, 00:17:54.959 "data_offset": 0, 00:17:54.959 "data_size": 65536 00:17:54.959 } 00:17:54.959 ] 00:17:54.959 }' 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.959 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.547 [2024-10-01 13:52:05.439883] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:55.547 13:52:05 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.547 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:55.547 [2024-10-01 13:52:05.731743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:55.807 /dev/nbd0 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:55.807 1+0 records in 00:17:55.807 1+0 records out 00:17:55.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397873 s, 10.3 MB/s 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:55.807 13:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:56.066 512+0 records in 00:17:56.066 512+0 records out 00:17:56.066 67108864 bytes (67 MB, 64 MiB) copied, 0.422857 s, 159 MB/s 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.066 13:52:06 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.066 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.324 [2024-10-01 13:52:06.460722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.324 [2024-10-01 13:52:06.476530] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.324 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.583 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.583 "name": "raid_bdev1", 00:17:56.583 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:56.583 "strip_size_kb": 64, 00:17:56.583 "state": "online", 00:17:56.583 "raid_level": "raid5f", 00:17:56.583 "superblock": false, 00:17:56.583 "num_base_bdevs": 3, 00:17:56.583 "num_base_bdevs_discovered": 2, 00:17:56.583 "num_base_bdevs_operational": 2, 00:17:56.583 "base_bdevs_list": [ 00:17:56.583 { 00:17:56.583 "name": null, 00:17:56.583 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:56.583 "is_configured": false, 00:17:56.583 "data_offset": 0, 00:17:56.583 "data_size": 65536 00:17:56.583 }, 00:17:56.583 { 00:17:56.583 "name": "BaseBdev2", 00:17:56.583 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:56.583 "is_configured": true, 00:17:56.583 "data_offset": 0, 00:17:56.583 "data_size": 65536 00:17:56.583 }, 00:17:56.583 { 00:17:56.583 "name": "BaseBdev3", 00:17:56.583 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:56.584 "is_configured": true, 00:17:56.584 "data_offset": 0, 00:17:56.584 "data_size": 65536 00:17:56.584 } 00:17:56.584 ] 00:17:56.584 }' 00:17:56.584 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.584 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.842 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.842 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.842 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.842 [2024-10-01 13:52:06.931938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.842 [2024-10-01 13:52:06.950576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:56.842 13:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.842 13:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:56.842 [2024-10-01 13:52:06.959994] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.807 
13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.807 13:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.083 13:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.083 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.083 "name": "raid_bdev1", 00:17:58.083 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:58.083 "strip_size_kb": 64, 00:17:58.083 "state": "online", 00:17:58.083 "raid_level": "raid5f", 00:17:58.083 "superblock": false, 00:17:58.083 "num_base_bdevs": 3, 00:17:58.083 "num_base_bdevs_discovered": 3, 00:17:58.083 "num_base_bdevs_operational": 3, 00:17:58.083 "process": { 00:17:58.083 "type": "rebuild", 00:17:58.083 "target": "spare", 00:17:58.083 "progress": { 00:17:58.083 "blocks": 20480, 00:17:58.083 "percent": 15 00:17:58.083 } 00:17:58.083 }, 00:17:58.083 "base_bdevs_list": [ 00:17:58.083 { 00:17:58.083 "name": "spare", 00:17:58.083 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:17:58.083 "is_configured": true, 00:17:58.083 "data_offset": 0, 00:17:58.083 "data_size": 65536 00:17:58.083 }, 00:17:58.083 { 00:17:58.083 "name": "BaseBdev2", 00:17:58.083 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:58.083 "is_configured": true, 00:17:58.083 "data_offset": 0, 00:17:58.083 "data_size": 65536 00:17:58.083 }, 00:17:58.083 
{ 00:17:58.083 "name": "BaseBdev3", 00:17:58.083 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:58.083 "is_configured": true, 00:17:58.083 "data_offset": 0, 00:17:58.083 "data_size": 65536 00:17:58.084 } 00:17:58.084 ] 00:17:58.084 }' 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.084 [2024-10-01 13:52:08.075732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.084 [2024-10-01 13:52:08.171684] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.084 [2024-10-01 13:52:08.171984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.084 [2024-10-01 13:52:08.172119] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.084 [2024-10-01 13:52:08.172168] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.084 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.084 "name": "raid_bdev1", 00:17:58.084 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:58.084 "strip_size_kb": 64, 00:17:58.084 "state": "online", 00:17:58.084 "raid_level": "raid5f", 00:17:58.084 "superblock": false, 00:17:58.084 "num_base_bdevs": 3, 00:17:58.084 "num_base_bdevs_discovered": 2, 00:17:58.084 "num_base_bdevs_operational": 2, 00:17:58.084 "base_bdevs_list": [ 00:17:58.084 { 00:17:58.084 "name": null, 00:17:58.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.084 
"is_configured": false, 00:17:58.084 "data_offset": 0, 00:17:58.084 "data_size": 65536 00:17:58.084 }, 00:17:58.084 { 00:17:58.084 "name": "BaseBdev2", 00:17:58.084 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:58.084 "is_configured": true, 00:17:58.084 "data_offset": 0, 00:17:58.084 "data_size": 65536 00:17:58.084 }, 00:17:58.084 { 00:17:58.084 "name": "BaseBdev3", 00:17:58.084 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:58.084 "is_configured": true, 00:17:58.084 "data_offset": 0, 00:17:58.084 "data_size": 65536 00:17:58.084 } 00:17:58.084 ] 00:17:58.084 }' 00:17:58.342 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.343 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.602 "name": 
"raid_bdev1", 00:17:58.602 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:58.602 "strip_size_kb": 64, 00:17:58.602 "state": "online", 00:17:58.602 "raid_level": "raid5f", 00:17:58.602 "superblock": false, 00:17:58.602 "num_base_bdevs": 3, 00:17:58.602 "num_base_bdevs_discovered": 2, 00:17:58.602 "num_base_bdevs_operational": 2, 00:17:58.602 "base_bdevs_list": [ 00:17:58.602 { 00:17:58.602 "name": null, 00:17:58.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.602 "is_configured": false, 00:17:58.602 "data_offset": 0, 00:17:58.602 "data_size": 65536 00:17:58.602 }, 00:17:58.602 { 00:17:58.602 "name": "BaseBdev2", 00:17:58.602 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:58.602 "is_configured": true, 00:17:58.602 "data_offset": 0, 00:17:58.602 "data_size": 65536 00:17:58.602 }, 00:17:58.602 { 00:17:58.602 "name": "BaseBdev3", 00:17:58.602 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:58.602 "is_configured": true, 00:17:58.602 "data_offset": 0, 00:17:58.602 "data_size": 65536 00:17:58.602 } 00:17:58.602 ] 00:17:58.602 }' 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.602 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.861 [2024-10-01 13:52:08.837315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.861 [2024-10-01 
13:52:08.854279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.861 13:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.861 [2024-10-01 13:52:08.863146] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.806 "name": "raid_bdev1", 00:17:59.806 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:17:59.806 "strip_size_kb": 64, 00:17:59.806 "state": "online", 00:17:59.806 "raid_level": "raid5f", 00:17:59.806 "superblock": false, 00:17:59.806 "num_base_bdevs": 3, 00:17:59.806 "num_base_bdevs_discovered": 3, 00:17:59.806 "num_base_bdevs_operational": 3, 
00:17:59.806 "process": { 00:17:59.806 "type": "rebuild", 00:17:59.806 "target": "spare", 00:17:59.806 "progress": { 00:17:59.806 "blocks": 20480, 00:17:59.806 "percent": 15 00:17:59.806 } 00:17:59.806 }, 00:17:59.806 "base_bdevs_list": [ 00:17:59.806 { 00:17:59.806 "name": "spare", 00:17:59.806 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:17:59.806 "is_configured": true, 00:17:59.806 "data_offset": 0, 00:17:59.806 "data_size": 65536 00:17:59.806 }, 00:17:59.806 { 00:17:59.806 "name": "BaseBdev2", 00:17:59.806 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:17:59.806 "is_configured": true, 00:17:59.806 "data_offset": 0, 00:17:59.806 "data_size": 65536 00:17:59.806 }, 00:17:59.806 { 00:17:59.806 "name": "BaseBdev3", 00:17:59.806 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:17:59.806 "is_configured": true, 00:17:59.806 "data_offset": 0, 00:17:59.806 "data_size": 65536 00:17:59.806 } 00:17:59.806 ] 00:17:59.806 }' 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.806 13:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.065 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.065 "name": "raid_bdev1", 00:18:00.065 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:00.065 "strip_size_kb": 64, 00:18:00.065 "state": "online", 00:18:00.065 "raid_level": "raid5f", 00:18:00.065 "superblock": false, 00:18:00.065 "num_base_bdevs": 3, 00:18:00.066 "num_base_bdevs_discovered": 3, 00:18:00.066 "num_base_bdevs_operational": 3, 00:18:00.066 "process": { 00:18:00.066 "type": "rebuild", 00:18:00.066 "target": "spare", 00:18:00.066 "progress": { 00:18:00.066 "blocks": 22528, 00:18:00.066 "percent": 17 00:18:00.066 } 00:18:00.066 }, 00:18:00.066 "base_bdevs_list": [ 00:18:00.066 { 00:18:00.066 "name": "spare", 00:18:00.066 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:00.066 "is_configured": true, 00:18:00.066 "data_offset": 0, 00:18:00.066 "data_size": 65536 00:18:00.066 }, 00:18:00.066 { 00:18:00.066 "name": "BaseBdev2", 
00:18:00.066 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:00.066 "is_configured": true, 00:18:00.066 "data_offset": 0, 00:18:00.066 "data_size": 65536 00:18:00.066 }, 00:18:00.066 { 00:18:00.066 "name": "BaseBdev3", 00:18:00.066 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:00.066 "is_configured": true, 00:18:00.066 "data_offset": 0, 00:18:00.066 "data_size": 65536 00:18:00.066 } 00:18:00.066 ] 00:18:00.066 }' 00:18:00.066 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.066 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.066 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.066 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.066 13:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.001 13:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.001 
13:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.258 "name": "raid_bdev1", 00:18:01.258 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:01.258 "strip_size_kb": 64, 00:18:01.258 "state": "online", 00:18:01.258 "raid_level": "raid5f", 00:18:01.258 "superblock": false, 00:18:01.258 "num_base_bdevs": 3, 00:18:01.258 "num_base_bdevs_discovered": 3, 00:18:01.258 "num_base_bdevs_operational": 3, 00:18:01.258 "process": { 00:18:01.258 "type": "rebuild", 00:18:01.258 "target": "spare", 00:18:01.258 "progress": { 00:18:01.258 "blocks": 45056, 00:18:01.258 "percent": 34 00:18:01.258 } 00:18:01.258 }, 00:18:01.258 "base_bdevs_list": [ 00:18:01.258 { 00:18:01.258 "name": "spare", 00:18:01.258 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:01.258 "is_configured": true, 00:18:01.258 "data_offset": 0, 00:18:01.258 "data_size": 65536 00:18:01.258 }, 00:18:01.258 { 00:18:01.258 "name": "BaseBdev2", 00:18:01.258 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:01.258 "is_configured": true, 00:18:01.258 "data_offset": 0, 00:18:01.258 "data_size": 65536 00:18:01.258 }, 00:18:01.258 { 00:18:01.258 "name": "BaseBdev3", 00:18:01.258 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:01.258 "is_configured": true, 00:18:01.258 "data_offset": 0, 00:18:01.258 "data_size": 65536 00:18:01.258 } 00:18:01.258 ] 00:18:01.258 }' 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.258 13:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.192 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.192 "name": "raid_bdev1", 00:18:02.192 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:02.192 "strip_size_kb": 64, 00:18:02.192 "state": "online", 00:18:02.192 "raid_level": "raid5f", 00:18:02.192 "superblock": false, 00:18:02.192 "num_base_bdevs": 3, 00:18:02.192 "num_base_bdevs_discovered": 3, 00:18:02.192 "num_base_bdevs_operational": 3, 00:18:02.192 "process": { 00:18:02.192 "type": "rebuild", 00:18:02.192 "target": "spare", 00:18:02.192 "progress": { 00:18:02.192 "blocks": 69632, 00:18:02.192 "percent": 53 00:18:02.192 } 
00:18:02.192 }, 00:18:02.192 "base_bdevs_list": [ 00:18:02.192 { 00:18:02.192 "name": "spare", 00:18:02.192 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:02.192 "is_configured": true, 00:18:02.192 "data_offset": 0, 00:18:02.192 "data_size": 65536 00:18:02.192 }, 00:18:02.192 { 00:18:02.192 "name": "BaseBdev2", 00:18:02.192 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:02.192 "is_configured": true, 00:18:02.192 "data_offset": 0, 00:18:02.192 "data_size": 65536 00:18:02.192 }, 00:18:02.192 { 00:18:02.192 "name": "BaseBdev3", 00:18:02.192 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:02.192 "is_configured": true, 00:18:02.192 "data_offset": 0, 00:18:02.192 "data_size": 65536 00:18:02.192 } 00:18:02.192 ] 00:18:02.192 }' 00:18:02.450 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.450 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.450 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.450 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.450 13:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.384 13:52:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.384 "name": "raid_bdev1", 00:18:03.384 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:03.384 "strip_size_kb": 64, 00:18:03.384 "state": "online", 00:18:03.384 "raid_level": "raid5f", 00:18:03.384 "superblock": false, 00:18:03.384 "num_base_bdevs": 3, 00:18:03.384 "num_base_bdevs_discovered": 3, 00:18:03.384 "num_base_bdevs_operational": 3, 00:18:03.384 "process": { 00:18:03.384 "type": "rebuild", 00:18:03.384 "target": "spare", 00:18:03.384 "progress": { 00:18:03.384 "blocks": 92160, 00:18:03.384 "percent": 70 00:18:03.384 } 00:18:03.384 }, 00:18:03.384 "base_bdevs_list": [ 00:18:03.384 { 00:18:03.384 "name": "spare", 00:18:03.384 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:03.384 "is_configured": true, 00:18:03.384 "data_offset": 0, 00:18:03.384 "data_size": 65536 00:18:03.384 }, 00:18:03.384 { 00:18:03.384 "name": "BaseBdev2", 00:18:03.384 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:03.384 "is_configured": true, 00:18:03.384 "data_offset": 0, 00:18:03.384 "data_size": 65536 00:18:03.384 }, 00:18:03.384 { 00:18:03.384 "name": "BaseBdev3", 00:18:03.384 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:03.384 "is_configured": true, 00:18:03.384 "data_offset": 0, 00:18:03.384 "data_size": 65536 00:18:03.384 } 00:18:03.384 ] 00:18:03.384 }' 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.384 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.642 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.642 13:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.577 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.578 "name": "raid_bdev1", 00:18:04.578 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:04.578 "strip_size_kb": 64, 00:18:04.578 "state": "online", 00:18:04.578 "raid_level": "raid5f", 00:18:04.578 "superblock": 
false, 00:18:04.578 "num_base_bdevs": 3, 00:18:04.578 "num_base_bdevs_discovered": 3, 00:18:04.578 "num_base_bdevs_operational": 3, 00:18:04.578 "process": { 00:18:04.578 "type": "rebuild", 00:18:04.578 "target": "spare", 00:18:04.578 "progress": { 00:18:04.578 "blocks": 116736, 00:18:04.578 "percent": 89 00:18:04.578 } 00:18:04.578 }, 00:18:04.578 "base_bdevs_list": [ 00:18:04.578 { 00:18:04.578 "name": "spare", 00:18:04.578 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:04.578 "is_configured": true, 00:18:04.578 "data_offset": 0, 00:18:04.578 "data_size": 65536 00:18:04.578 }, 00:18:04.578 { 00:18:04.578 "name": "BaseBdev2", 00:18:04.578 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:04.578 "is_configured": true, 00:18:04.578 "data_offset": 0, 00:18:04.578 "data_size": 65536 00:18:04.578 }, 00:18:04.578 { 00:18:04.578 "name": "BaseBdev3", 00:18:04.578 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:04.578 "is_configured": true, 00:18:04.578 "data_offset": 0, 00:18:04.578 "data_size": 65536 00:18:04.578 } 00:18:04.578 ] 00:18:04.578 }' 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.578 13:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.145 [2024-10-01 13:52:15.323880] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:05.145 [2024-10-01 13:52:15.324000] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:05.145 [2024-10-01 13:52:15.324057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.711 "name": "raid_bdev1", 00:18:05.711 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:05.711 "strip_size_kb": 64, 00:18:05.711 "state": "online", 00:18:05.711 "raid_level": "raid5f", 00:18:05.711 "superblock": false, 00:18:05.711 "num_base_bdevs": 3, 00:18:05.711 "num_base_bdevs_discovered": 3, 00:18:05.711 "num_base_bdevs_operational": 3, 00:18:05.711 "base_bdevs_list": [ 00:18:05.711 { 00:18:05.711 "name": "spare", 00:18:05.711 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:05.711 "is_configured": true, 00:18:05.711 "data_offset": 0, 00:18:05.711 "data_size": 65536 00:18:05.711 }, 00:18:05.711 { 00:18:05.711 "name": "BaseBdev2", 00:18:05.711 "uuid": 
"01c05360-f252-51c8-9504-b98f397cd661", 00:18:05.711 "is_configured": true, 00:18:05.711 "data_offset": 0, 00:18:05.711 "data_size": 65536 00:18:05.711 }, 00:18:05.711 { 00:18:05.711 "name": "BaseBdev3", 00:18:05.711 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:05.711 "is_configured": true, 00:18:05.711 "data_offset": 0, 00:18:05.711 "data_size": 65536 00:18:05.711 } 00:18:05.711 ] 00:18:05.711 }' 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:05.711 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.969 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:05.969 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:05.969 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.969 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.969 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.970 "name": "raid_bdev1", 00:18:05.970 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:05.970 "strip_size_kb": 64, 00:18:05.970 "state": "online", 00:18:05.970 "raid_level": "raid5f", 00:18:05.970 "superblock": false, 00:18:05.970 "num_base_bdevs": 3, 00:18:05.970 "num_base_bdevs_discovered": 3, 00:18:05.970 "num_base_bdevs_operational": 3, 00:18:05.970 "base_bdevs_list": [ 00:18:05.970 { 00:18:05.970 "name": "spare", 00:18:05.970 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 0, 00:18:05.970 "data_size": 65536 00:18:05.970 }, 00:18:05.970 { 00:18:05.970 "name": "BaseBdev2", 00:18:05.970 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 0, 00:18:05.970 "data_size": 65536 00:18:05.970 }, 00:18:05.970 { 00:18:05.970 "name": "BaseBdev3", 00:18:05.970 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 0, 00:18:05.970 "data_size": 65536 00:18:05.970 } 00:18:05.970 ] 00:18:05.970 }' 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.970 13:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.970 "name": "raid_bdev1", 00:18:05.970 "uuid": "8e61f6ac-66c2-4e66-9f4a-4f277a0cb896", 00:18:05.970 "strip_size_kb": 64, 00:18:05.970 "state": "online", 00:18:05.970 "raid_level": "raid5f", 00:18:05.970 "superblock": false, 00:18:05.970 "num_base_bdevs": 3, 00:18:05.970 "num_base_bdevs_discovered": 3, 00:18:05.970 "num_base_bdevs_operational": 3, 00:18:05.970 "base_bdevs_list": [ 00:18:05.970 { 00:18:05.970 "name": "spare", 00:18:05.970 "uuid": "ab867123-6fc2-55e1-8f02-81336fe5d060", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 
0, 00:18:05.970 "data_size": 65536 00:18:05.970 }, 00:18:05.970 { 00:18:05.970 "name": "BaseBdev2", 00:18:05.970 "uuid": "01c05360-f252-51c8-9504-b98f397cd661", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 0, 00:18:05.970 "data_size": 65536 00:18:05.970 }, 00:18:05.970 { 00:18:05.970 "name": "BaseBdev3", 00:18:05.970 "uuid": "87baef17-aa4a-5abe-be9f-fae179a79fd8", 00:18:05.970 "is_configured": true, 00:18:05.970 "data_offset": 0, 00:18:05.970 "data_size": 65536 00:18:05.970 } 00:18:05.970 ] 00:18:05.970 }' 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.970 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.536 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.537 [2024-10-01 13:52:16.504984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.537 [2024-10-01 13:52:16.505023] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.537 [2024-10-01 13:52:16.505119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.537 [2024-10-01 13:52:16.505218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.537 [2024-10-01 13:52:16.505248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.537 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:06.794 /dev/nbd0 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:06.794 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.795 1+0 records in 00:18:06.795 1+0 records out 00:18:06.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806242 s, 5.1 MB/s 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.795 13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.795 
13:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:07.053 /dev/nbd1 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.053 1+0 records in 00:18:07.053 1+0 records out 00:18:07.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515094 s, 8.0 MB/s 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.053 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.311 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.570 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81606 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81606 ']' 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81606 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:18:07.828 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81606 00:18:07.829 killing process with pid 81606 00:18:07.829 Received shutdown signal, test time 
was about 60.000000 seconds 00:18:07.829 00:18:07.829 Latency(us) 00:18:07.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.829 =================================================================================================================== 00:18:07.829 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81606' 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81606 00:18:07.829 [2024-10-01 13:52:17.905879] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.829 13:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81606 00:18:08.398 [2024-10-01 13:52:18.340378] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.811 13:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:09.811 00:18:09.811 real 0m15.973s 00:18:09.811 user 0m19.468s 00:18:09.812 sys 0m2.391s 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.812 ************************************ 00:18:09.812 END TEST raid5f_rebuild_test 00:18:09.812 ************************************ 00:18:09.812 13:52:19 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:09.812 13:52:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:09.812 13:52:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.812 13:52:19 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.812 ************************************ 00:18:09.812 START TEST raid5f_rebuild_test_sb 00:18:09.812 ************************************ 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82056 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82056 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82056 ']' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.812 13:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.812 [2024-10-01 13:52:19.917325] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:18:09.812 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.812 Zero copy mechanism will not be used. 00:18:09.812 [2024-10-01 13:52:19.917724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82056 ] 00:18:10.072 [2024-10-01 13:52:20.093839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.330 [2024-10-01 13:52:20.330446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.590 [2024-10-01 13:52:20.558246] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.590 [2024-10-01 13:52:20.558285] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.849 BaseBdev1_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.849 [2024-10-01 13:52:20.867120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.849 [2024-10-01 13:52:20.868097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.849 [2024-10-01 13:52:20.868137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.849 [2024-10-01 13:52:20.868157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.849 [2024-10-01 13:52:20.870612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.849 [2024-10-01 13:52:20.870653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.849 BaseBdev1 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:10.849 13:52:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.849 BaseBdev2_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.849 [2024-10-01 13:52:20.935440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:10.849 [2024-10-01 13:52:20.935513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.849 [2024-10-01 13:52:20.935534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.849 [2024-10-01 13:52:20.935567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.849 [2024-10-01 13:52:20.937966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.849 [2024-10-01 13:52:20.938011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.849 BaseBdev2 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:10.849 BaseBdev3_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.849 [2024-10-01 13:52:20.992628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:10.849 [2024-10-01 13:52:20.992700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.849 [2024-10-01 13:52:20.992724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:10.849 [2024-10-01 13:52:20.992739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.849 [2024-10-01 13:52:20.995202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.849 [2024-10-01 13:52:20.995246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:10.849 BaseBdev3 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.849 13:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.109 spare_malloc 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.109 spare_delay 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.109 [2024-10-01 13:52:21.062339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:11.109 [2024-10-01 13:52:21.062409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.109 [2024-10-01 13:52:21.062430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:11.109 [2024-10-01 13:52:21.062444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.109 [2024-10-01 13:52:21.064814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.109 [2024-10-01 13:52:21.064974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:11.109 spare 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.109 [2024-10-01 13:52:21.074421] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.109 [2024-10-01 13:52:21.076437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.109 [2024-10-01 13:52:21.076506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.109 [2024-10-01 13:52:21.076677] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:11.109 [2024-10-01 13:52:21.076689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:11.109 [2024-10-01 13:52:21.076951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:11.109 [2024-10-01 13:52:21.082665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:11.109 [2024-10-01 13:52:21.082692] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:11.109 [2024-10-01 13:52:21.082870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.109 "name": "raid_bdev1", 00:18:11.109 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:11.109 "strip_size_kb": 64, 00:18:11.109 "state": "online", 00:18:11.109 "raid_level": "raid5f", 00:18:11.109 "superblock": true, 00:18:11.109 "num_base_bdevs": 3, 00:18:11.109 "num_base_bdevs_discovered": 3, 00:18:11.109 "num_base_bdevs_operational": 3, 00:18:11.109 "base_bdevs_list": [ 00:18:11.109 { 00:18:11.109 "name": "BaseBdev1", 00:18:11.109 "uuid": "f7e2fe6c-cd1e-5a75-b7cb-d0db6f5a33d7", 00:18:11.109 "is_configured": true, 00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 }, 00:18:11.109 { 00:18:11.109 "name": "BaseBdev2", 00:18:11.109 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:11.109 "is_configured": true, 00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 }, 00:18:11.109 { 00:18:11.109 "name": "BaseBdev3", 00:18:11.109 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:11.109 "is_configured": true, 
00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 } 00:18:11.109 ] 00:18:11.109 }' 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.109 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.368 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.368 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.368 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:11.368 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.368 [2024-10-01 13:52:21.520858] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.368 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:11.628 13:52:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.628 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:11.888 [2024-10-01 13:52:21.824327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:11.888 /dev/nbd0 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.888 1+0 records in 00:18:11.888 1+0 records out 00:18:11.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449291 s, 9.1 MB/s 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:11.888 13:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:12.455 496+0 records in 00:18:12.455 496+0 records out 00:18:12.455 65011712 bytes (65 MB, 62 MiB) copied, 0.462071 s, 141 MB/s 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.455 [2024-10-01 13:52:22.622368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.455 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.456 [2024-10-01 13:52:22.642636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.715 13:52:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.715 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.715 "name": "raid_bdev1", 00:18:12.715 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:12.715 "strip_size_kb": 64, 00:18:12.716 "state": "online", 00:18:12.716 "raid_level": "raid5f", 00:18:12.716 "superblock": true, 00:18:12.716 "num_base_bdevs": 3, 00:18:12.716 "num_base_bdevs_discovered": 2, 00:18:12.716 "num_base_bdevs_operational": 2, 00:18:12.716 "base_bdevs_list": [ 00:18:12.716 { 00:18:12.716 "name": null, 00:18:12.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.716 "is_configured": false, 00:18:12.716 "data_offset": 0, 00:18:12.716 "data_size": 63488 00:18:12.716 }, 00:18:12.716 { 00:18:12.716 "name": "BaseBdev2", 00:18:12.716 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:12.716 "is_configured": true, 00:18:12.716 "data_offset": 2048, 00:18:12.716 "data_size": 63488 00:18:12.716 }, 00:18:12.716 { 00:18:12.716 "name": "BaseBdev3", 00:18:12.716 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:12.716 "is_configured": true, 00:18:12.716 "data_offset": 2048, 00:18:12.716 "data_size": 63488 00:18:12.716 } 00:18:12.716 ] 00:18:12.716 }' 00:18:12.716 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.716 13:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.974 13:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.974 13:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.974 13:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.974 [2024-10-01 13:52:23.058152] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.974 [2024-10-01 13:52:23.077486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:12.974 13:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.974 13:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:12.974 [2024-10-01 13:52:23.086744] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.912 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.172 "name": "raid_bdev1", 00:18:14.172 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:14.172 "strip_size_kb": 64, 00:18:14.172 "state": "online", 00:18:14.172 "raid_level": "raid5f", 00:18:14.172 
"superblock": true, 00:18:14.172 "num_base_bdevs": 3, 00:18:14.172 "num_base_bdevs_discovered": 3, 00:18:14.172 "num_base_bdevs_operational": 3, 00:18:14.172 "process": { 00:18:14.172 "type": "rebuild", 00:18:14.172 "target": "spare", 00:18:14.172 "progress": { 00:18:14.172 "blocks": 20480, 00:18:14.172 "percent": 16 00:18:14.172 } 00:18:14.172 }, 00:18:14.172 "base_bdevs_list": [ 00:18:14.172 { 00:18:14.172 "name": "spare", 00:18:14.172 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:14.172 "is_configured": true, 00:18:14.172 "data_offset": 2048, 00:18:14.172 "data_size": 63488 00:18:14.172 }, 00:18:14.172 { 00:18:14.172 "name": "BaseBdev2", 00:18:14.172 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:14.172 "is_configured": true, 00:18:14.172 "data_offset": 2048, 00:18:14.172 "data_size": 63488 00:18:14.172 }, 00:18:14.172 { 00:18:14.172 "name": "BaseBdev3", 00:18:14.172 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:14.172 "is_configured": true, 00:18:14.172 "data_offset": 2048, 00:18:14.172 "data_size": 63488 00:18:14.172 } 00:18:14.172 ] 00:18:14.172 }' 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.172 [2024-10-01 13:52:24.242061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:14.172 [2024-10-01 13:52:24.298037] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.172 [2024-10-01 13:52:24.298382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.172 [2024-10-01 13:52:24.298523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.172 [2024-10-01 13:52:24.298571] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.172 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.431 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.431 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.431 "name": "raid_bdev1", 00:18:14.431 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:14.431 "strip_size_kb": 64, 00:18:14.431 "state": "online", 00:18:14.431 "raid_level": "raid5f", 00:18:14.431 "superblock": true, 00:18:14.431 "num_base_bdevs": 3, 00:18:14.431 "num_base_bdevs_discovered": 2, 00:18:14.431 "num_base_bdevs_operational": 2, 00:18:14.431 "base_bdevs_list": [ 00:18:14.431 { 00:18:14.431 "name": null, 00:18:14.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.431 "is_configured": false, 00:18:14.431 "data_offset": 0, 00:18:14.431 "data_size": 63488 00:18:14.431 }, 00:18:14.431 { 00:18:14.431 "name": "BaseBdev2", 00:18:14.431 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:14.431 "is_configured": true, 00:18:14.431 "data_offset": 2048, 00:18:14.431 "data_size": 63488 00:18:14.431 }, 00:18:14.431 { 00:18:14.431 "name": "BaseBdev3", 00:18:14.431 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:14.431 "is_configured": true, 00:18:14.431 "data_offset": 2048, 00:18:14.431 "data_size": 63488 00:18:14.431 } 00:18:14.431 ] 00:18:14.431 }' 00:18:14.431 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.431 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.690 13:52:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.690 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.690 "name": "raid_bdev1", 00:18:14.690 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:14.690 "strip_size_kb": 64, 00:18:14.690 "state": "online", 00:18:14.690 "raid_level": "raid5f", 00:18:14.690 "superblock": true, 00:18:14.690 "num_base_bdevs": 3, 00:18:14.690 "num_base_bdevs_discovered": 2, 00:18:14.690 "num_base_bdevs_operational": 2, 00:18:14.690 "base_bdevs_list": [ 00:18:14.691 { 00:18:14.691 "name": null, 00:18:14.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.691 "is_configured": false, 00:18:14.691 "data_offset": 0, 00:18:14.691 "data_size": 63488 00:18:14.691 }, 00:18:14.691 { 00:18:14.691 "name": "BaseBdev2", 00:18:14.691 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:14.691 "is_configured": true, 00:18:14.691 "data_offset": 2048, 00:18:14.691 "data_size": 63488 00:18:14.691 }, 00:18:14.691 { 00:18:14.691 "name": "BaseBdev3", 00:18:14.691 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:14.691 "is_configured": true, 00:18:14.691 "data_offset": 2048, 00:18:14.691 
"data_size": 63488 00:18:14.691 } 00:18:14.691 ] 00:18:14.691 }' 00:18:14.691 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.691 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.691 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.950 [2024-10-01 13:52:24.900890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.950 [2024-10-01 13:52:24.916804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.950 13:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:14.950 [2024-10-01 13:52:24.925060] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.915 "name": "raid_bdev1", 00:18:15.915 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:15.915 "strip_size_kb": 64, 00:18:15.915 "state": "online", 00:18:15.915 "raid_level": "raid5f", 00:18:15.915 "superblock": true, 00:18:15.915 "num_base_bdevs": 3, 00:18:15.915 "num_base_bdevs_discovered": 3, 00:18:15.915 "num_base_bdevs_operational": 3, 00:18:15.915 "process": { 00:18:15.915 "type": "rebuild", 00:18:15.915 "target": "spare", 00:18:15.915 "progress": { 00:18:15.915 "blocks": 18432, 00:18:15.915 "percent": 14 00:18:15.915 } 00:18:15.915 }, 00:18:15.915 "base_bdevs_list": [ 00:18:15.915 { 00:18:15.915 "name": "spare", 00:18:15.915 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:15.915 "is_configured": true, 00:18:15.915 "data_offset": 2048, 00:18:15.915 "data_size": 63488 00:18:15.915 }, 00:18:15.915 { 00:18:15.915 "name": "BaseBdev2", 00:18:15.915 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:15.915 "is_configured": true, 00:18:15.915 "data_offset": 2048, 00:18:15.915 "data_size": 63488 00:18:15.915 }, 00:18:15.915 { 00:18:15.915 "name": "BaseBdev3", 00:18:15.915 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:15.915 "is_configured": true, 00:18:15.915 "data_offset": 2048, 00:18:15.915 "data_size": 63488 00:18:15.915 } 00:18:15.915 ] 00:18:15.915 }' 
00:18:15.915 13:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:15.915 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:15.915 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=581 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.916 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.916 "name": "raid_bdev1", 00:18:15.916 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:15.916 "strip_size_kb": 64, 00:18:15.916 "state": "online", 00:18:15.916 "raid_level": "raid5f", 00:18:15.916 "superblock": true, 00:18:15.916 "num_base_bdevs": 3, 00:18:15.916 "num_base_bdevs_discovered": 3, 00:18:15.916 "num_base_bdevs_operational": 3, 00:18:15.916 "process": { 00:18:15.916 "type": "rebuild", 00:18:15.916 "target": "spare", 00:18:15.916 "progress": { 00:18:15.916 "blocks": 22528, 00:18:15.916 "percent": 17 00:18:15.916 } 00:18:15.916 }, 00:18:15.916 "base_bdevs_list": [ 00:18:15.916 { 00:18:15.916 "name": "spare", 00:18:15.916 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:15.916 "is_configured": true, 00:18:15.916 "data_offset": 2048, 00:18:15.916 "data_size": 63488 00:18:15.916 }, 00:18:15.916 { 00:18:15.916 "name": "BaseBdev2", 00:18:15.916 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:15.916 "is_configured": true, 00:18:15.916 "data_offset": 2048, 00:18:15.916 "data_size": 63488 00:18:15.916 }, 00:18:15.916 { 00:18:15.916 "name": "BaseBdev3", 00:18:15.916 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:15.916 "is_configured": true, 00:18:15.916 "data_offset": 2048, 00:18:15.916 "data_size": 63488 00:18:15.916 } 00:18:15.916 ] 00:18:15.916 }' 00:18:16.175 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.175 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:16.175 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.175 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.175 13:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.112 "name": "raid_bdev1", 00:18:17.112 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:17.112 "strip_size_kb": 64, 00:18:17.112 "state": "online", 00:18:17.112 "raid_level": "raid5f", 00:18:17.112 "superblock": true, 00:18:17.112 "num_base_bdevs": 3, 00:18:17.112 "num_base_bdevs_discovered": 3, 00:18:17.112 
"num_base_bdevs_operational": 3, 00:18:17.112 "process": { 00:18:17.112 "type": "rebuild", 00:18:17.112 "target": "spare", 00:18:17.112 "progress": { 00:18:17.112 "blocks": 45056, 00:18:17.112 "percent": 35 00:18:17.112 } 00:18:17.112 }, 00:18:17.112 "base_bdevs_list": [ 00:18:17.112 { 00:18:17.112 "name": "spare", 00:18:17.112 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:17.112 "is_configured": true, 00:18:17.112 "data_offset": 2048, 00:18:17.112 "data_size": 63488 00:18:17.112 }, 00:18:17.112 { 00:18:17.112 "name": "BaseBdev2", 00:18:17.112 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:17.112 "is_configured": true, 00:18:17.112 "data_offset": 2048, 00:18:17.112 "data_size": 63488 00:18:17.112 }, 00:18:17.112 { 00:18:17.112 "name": "BaseBdev3", 00:18:17.112 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:17.112 "is_configured": true, 00:18:17.112 "data_offset": 2048, 00:18:17.112 "data_size": 63488 00:18:17.112 } 00:18:17.112 ] 00:18:17.112 }' 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.112 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.370 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.370 13:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.306 "name": "raid_bdev1", 00:18:18.306 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:18.306 "strip_size_kb": 64, 00:18:18.306 "state": "online", 00:18:18.306 "raid_level": "raid5f", 00:18:18.306 "superblock": true, 00:18:18.306 "num_base_bdevs": 3, 00:18:18.306 "num_base_bdevs_discovered": 3, 00:18:18.306 "num_base_bdevs_operational": 3, 00:18:18.306 "process": { 00:18:18.306 "type": "rebuild", 00:18:18.306 "target": "spare", 00:18:18.306 "progress": { 00:18:18.306 "blocks": 67584, 00:18:18.306 "percent": 53 00:18:18.306 } 00:18:18.306 }, 00:18:18.306 "base_bdevs_list": [ 00:18:18.306 { 00:18:18.306 "name": "spare", 00:18:18.306 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:18.306 "is_configured": true, 00:18:18.306 "data_offset": 2048, 00:18:18.306 "data_size": 63488 00:18:18.306 }, 00:18:18.306 { 00:18:18.306 "name": "BaseBdev2", 00:18:18.306 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:18.306 "is_configured": true, 00:18:18.306 "data_offset": 2048, 00:18:18.306 "data_size": 63488 00:18:18.306 }, 00:18:18.306 { 00:18:18.306 "name": "BaseBdev3", 
00:18:18.306 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:18.306 "is_configured": true, 00:18:18.306 "data_offset": 2048, 00:18:18.306 "data_size": 63488 00:18:18.306 } 00:18:18.306 ] 00:18:18.306 }' 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.306 13:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:19.685 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.685 "name": "raid_bdev1", 00:18:19.685 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:19.685 "strip_size_kb": 64, 00:18:19.685 "state": "online", 00:18:19.685 "raid_level": "raid5f", 00:18:19.685 "superblock": true, 00:18:19.685 "num_base_bdevs": 3, 00:18:19.685 "num_base_bdevs_discovered": 3, 00:18:19.685 "num_base_bdevs_operational": 3, 00:18:19.685 "process": { 00:18:19.685 "type": "rebuild", 00:18:19.685 "target": "spare", 00:18:19.685 "progress": { 00:18:19.685 "blocks": 92160, 00:18:19.685 "percent": 72 00:18:19.685 } 00:18:19.686 }, 00:18:19.686 "base_bdevs_list": [ 00:18:19.686 { 00:18:19.686 "name": "spare", 00:18:19.686 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:19.686 "is_configured": true, 00:18:19.686 "data_offset": 2048, 00:18:19.686 "data_size": 63488 00:18:19.686 }, 00:18:19.686 { 00:18:19.686 "name": "BaseBdev2", 00:18:19.686 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:19.686 "is_configured": true, 00:18:19.686 "data_offset": 2048, 00:18:19.686 "data_size": 63488 00:18:19.686 }, 00:18:19.686 { 00:18:19.686 "name": "BaseBdev3", 00:18:19.686 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:19.686 "is_configured": true, 00:18:19.686 "data_offset": 2048, 00:18:19.686 "data_size": 63488 00:18:19.686 } 00:18:19.686 ] 00:18:19.686 }' 00:18:19.686 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.686 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.686 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.686 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.686 13:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.622 13:52:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.622 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.622 "name": "raid_bdev1", 00:18:20.622 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:20.622 "strip_size_kb": 64, 00:18:20.622 "state": "online", 00:18:20.622 "raid_level": "raid5f", 00:18:20.622 "superblock": true, 00:18:20.622 "num_base_bdevs": 3, 00:18:20.622 "num_base_bdevs_discovered": 3, 00:18:20.622 "num_base_bdevs_operational": 3, 00:18:20.622 "process": { 00:18:20.622 "type": "rebuild", 00:18:20.622 "target": "spare", 00:18:20.622 "progress": { 00:18:20.622 "blocks": 114688, 00:18:20.623 "percent": 90 00:18:20.623 } 00:18:20.623 }, 00:18:20.623 "base_bdevs_list": [ 00:18:20.623 { 00:18:20.623 "name": "spare", 00:18:20.623 "uuid": 
"9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:20.623 "is_configured": true, 00:18:20.623 "data_offset": 2048, 00:18:20.623 "data_size": 63488 00:18:20.623 }, 00:18:20.623 { 00:18:20.623 "name": "BaseBdev2", 00:18:20.623 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:20.623 "is_configured": true, 00:18:20.623 "data_offset": 2048, 00:18:20.623 "data_size": 63488 00:18:20.623 }, 00:18:20.623 { 00:18:20.623 "name": "BaseBdev3", 00:18:20.623 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:20.623 "is_configured": true, 00:18:20.623 "data_offset": 2048, 00:18:20.623 "data_size": 63488 00:18:20.623 } 00:18:20.623 ] 00:18:20.623 }' 00:18:20.623 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.623 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.623 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.623 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.623 13:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.191 [2024-10-01 13:52:31.183089] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:21.191 [2024-10-01 13:52:31.183201] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:21.191 [2024-10-01 13:52:31.183383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.759 "name": "raid_bdev1", 00:18:21.759 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:21.759 "strip_size_kb": 64, 00:18:21.759 "state": "online", 00:18:21.759 "raid_level": "raid5f", 00:18:21.759 "superblock": true, 00:18:21.759 "num_base_bdevs": 3, 00:18:21.759 "num_base_bdevs_discovered": 3, 00:18:21.759 "num_base_bdevs_operational": 3, 00:18:21.759 "base_bdevs_list": [ 00:18:21.759 { 00:18:21.759 "name": "spare", 00:18:21.759 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:21.759 "is_configured": true, 00:18:21.759 "data_offset": 2048, 00:18:21.759 "data_size": 63488 00:18:21.759 }, 00:18:21.759 { 00:18:21.759 "name": "BaseBdev2", 00:18:21.759 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:21.759 "is_configured": true, 00:18:21.759 "data_offset": 2048, 00:18:21.759 "data_size": 63488 00:18:21.759 }, 00:18:21.759 { 00:18:21.759 "name": "BaseBdev3", 00:18:21.759 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:21.759 "is_configured": true, 00:18:21.759 "data_offset": 2048, 00:18:21.759 "data_size": 63488 00:18:21.759 } 
00:18:21.759 ] 00:18:21.759 }' 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.759 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.760 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.760 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.019 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.019 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.019 "name": "raid_bdev1", 00:18:22.019 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:22.019 "strip_size_kb": 64, 00:18:22.019 "state": "online", 00:18:22.019 "raid_level": 
"raid5f", 00:18:22.019 "superblock": true, 00:18:22.019 "num_base_bdevs": 3, 00:18:22.019 "num_base_bdevs_discovered": 3, 00:18:22.019 "num_base_bdevs_operational": 3, 00:18:22.019 "base_bdevs_list": [ 00:18:22.019 { 00:18:22.019 "name": "spare", 00:18:22.019 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:22.019 "is_configured": true, 00:18:22.019 "data_offset": 2048, 00:18:22.019 "data_size": 63488 00:18:22.019 }, 00:18:22.019 { 00:18:22.019 "name": "BaseBdev2", 00:18:22.019 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:22.019 "is_configured": true, 00:18:22.019 "data_offset": 2048, 00:18:22.019 "data_size": 63488 00:18:22.019 }, 00:18:22.019 { 00:18:22.019 "name": "BaseBdev3", 00:18:22.019 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:22.019 "is_configured": true, 00:18:22.019 "data_offset": 2048, 00:18:22.019 "data_size": 63488 00:18:22.019 } 00:18:22.019 ] 00:18:22.019 }' 00:18:22.019 13:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.019 13:52:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.019 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.019 "name": "raid_bdev1", 00:18:22.019 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:22.019 "strip_size_kb": 64, 00:18:22.019 "state": "online", 00:18:22.019 "raid_level": "raid5f", 00:18:22.019 "superblock": true, 00:18:22.019 "num_base_bdevs": 3, 00:18:22.019 "num_base_bdevs_discovered": 3, 00:18:22.019 "num_base_bdevs_operational": 3, 00:18:22.019 "base_bdevs_list": [ 00:18:22.019 { 00:18:22.019 "name": "spare", 00:18:22.019 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:22.019 "is_configured": true, 00:18:22.019 "data_offset": 2048, 00:18:22.019 "data_size": 63488 00:18:22.019 }, 00:18:22.019 { 00:18:22.019 "name": "BaseBdev2", 00:18:22.020 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:22.020 "is_configured": true, 00:18:22.020 "data_offset": 2048, 00:18:22.020 
"data_size": 63488 00:18:22.020 }, 00:18:22.020 { 00:18:22.020 "name": "BaseBdev3", 00:18:22.020 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:22.020 "is_configured": true, 00:18:22.020 "data_offset": 2048, 00:18:22.020 "data_size": 63488 00:18:22.020 } 00:18:22.020 ] 00:18:22.020 }' 00:18:22.020 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.020 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.587 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.587 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.587 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.587 [2024-10-01 13:52:32.487644] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.587 [2024-10-01 13:52:32.487680] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.587 [2024-10-01 13:52:32.487796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.587 [2024-10-01 13:52:32.487895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.587 [2024-10-01 13:52:32.487916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:22.588 /dev/nbd0 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:22.588 
13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:22.588 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.588 1+0 records in 00:18:22.588 1+0 records out 00:18:22.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366004 s, 11.2 MB/s 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.846 13:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:22.846 /dev/nbd1 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.105 1+0 records in 00:18:23.105 1+0 records out 00:18:23.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306795 s, 13.4 MB/s 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:23.105 13:52:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.105 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:23.364 13:52:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.364 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.623 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.623 13:52:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.623 [2024-10-01 13:52:33.776681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.623 [2024-10-01 13:52:33.776751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.623 [2024-10-01 13:52:33.776777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:23.623 [2024-10-01 13:52:33.776794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.623 [2024-10-01 13:52:33.779735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.623 [2024-10-01 13:52:33.779788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.623 [2024-10-01 13:52:33.779901] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:23.623 [2024-10-01 13:52:33.779980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.623 [2024-10-01 13:52:33.780132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.624 [2024-10-01 13:52:33.780264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.624 spare 00:18:23.624 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.624 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:23.624 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.624 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.882 [2024-10-01 13:52:33.880214] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:23.882 [2024-10-01 13:52:33.880279] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:18:23.882 [2024-10-01 13:52:33.880710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:23.882 [2024-10-01 13:52:33.886869] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:23.882 [2024-10-01 13:52:33.887027] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:23.882 [2024-10-01 13:52:33.887364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.882 "name": "raid_bdev1", 00:18:23.882 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:23.882 "strip_size_kb": 64, 00:18:23.882 "state": "online", 00:18:23.882 "raid_level": "raid5f", 00:18:23.882 "superblock": true, 00:18:23.882 "num_base_bdevs": 3, 00:18:23.882 "num_base_bdevs_discovered": 3, 00:18:23.882 "num_base_bdevs_operational": 3, 00:18:23.882 "base_bdevs_list": [ 00:18:23.882 { 00:18:23.882 "name": "spare", 00:18:23.882 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:23.882 "is_configured": true, 00:18:23.882 "data_offset": 2048, 00:18:23.882 "data_size": 63488 00:18:23.882 }, 00:18:23.882 { 00:18:23.882 "name": "BaseBdev2", 00:18:23.882 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:23.882 "is_configured": true, 00:18:23.882 "data_offset": 2048, 00:18:23.882 "data_size": 63488 00:18:23.882 }, 00:18:23.882 { 00:18:23.882 "name": "BaseBdev3", 00:18:23.882 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:23.882 "is_configured": true, 00:18:23.882 "data_offset": 2048, 00:18:23.882 "data_size": 63488 00:18:23.882 } 00:18:23.882 ] 00:18:23.882 }' 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.882 13:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.141 
13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.141 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.399 "name": "raid_bdev1", 00:18:24.399 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:24.399 "strip_size_kb": 64, 00:18:24.399 "state": "online", 00:18:24.399 "raid_level": "raid5f", 00:18:24.399 "superblock": true, 00:18:24.399 "num_base_bdevs": 3, 00:18:24.399 "num_base_bdevs_discovered": 3, 00:18:24.399 "num_base_bdevs_operational": 3, 00:18:24.399 "base_bdevs_list": [ 00:18:24.399 { 00:18:24.399 "name": "spare", 00:18:24.399 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:24.399 "is_configured": true, 00:18:24.399 "data_offset": 2048, 00:18:24.399 "data_size": 63488 00:18:24.399 }, 00:18:24.399 { 00:18:24.399 "name": "BaseBdev2", 00:18:24.399 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:24.399 "is_configured": true, 00:18:24.399 "data_offset": 2048, 00:18:24.399 "data_size": 63488 00:18:24.399 }, 00:18:24.399 { 00:18:24.399 "name": "BaseBdev3", 00:18:24.399 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:24.399 "is_configured": true, 00:18:24.399 "data_offset": 2048, 
00:18:24.399 "data_size": 63488 00:18:24.399 } 00:18:24.399 ] 00:18:24.399 }' 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.399 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.399 [2024-10-01 13:52:34.450069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.400 "name": "raid_bdev1", 00:18:24.400 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:24.400 "strip_size_kb": 64, 00:18:24.400 "state": "online", 00:18:24.400 "raid_level": "raid5f", 00:18:24.400 "superblock": true, 00:18:24.400 "num_base_bdevs": 3, 00:18:24.400 "num_base_bdevs_discovered": 2, 00:18:24.400 "num_base_bdevs_operational": 2, 00:18:24.400 "base_bdevs_list": [ 00:18:24.400 { 00:18:24.400 "name": null, 00:18:24.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:24.400 "is_configured": false, 00:18:24.400 "data_offset": 0, 00:18:24.400 "data_size": 63488 00:18:24.400 }, 00:18:24.400 { 00:18:24.400 "name": "BaseBdev2", 00:18:24.400 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:24.400 "is_configured": true, 00:18:24.400 "data_offset": 2048, 00:18:24.400 "data_size": 63488 00:18:24.400 }, 00:18:24.400 { 00:18:24.400 "name": "BaseBdev3", 00:18:24.400 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:24.400 "is_configured": true, 00:18:24.400 "data_offset": 2048, 00:18:24.400 "data_size": 63488 00:18:24.400 } 00:18:24.400 ] 00:18:24.400 }' 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.400 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.965 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.965 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.965 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.965 [2024-10-01 13:52:34.861516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.965 [2024-10-01 13:52:34.861712] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.966 [2024-10-01 13:52:34.861734] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:24.966 [2024-10-01 13:52:34.861778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.966 [2024-10-01 13:52:34.878817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:24.966 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.966 13:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:24.966 [2024-10-01 13:52:34.887504] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.901 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.901 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.901 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.901 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.902 "name": "raid_bdev1", 00:18:25.902 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:25.902 "strip_size_kb": 64, 00:18:25.902 "state": "online", 00:18:25.902 
"raid_level": "raid5f", 00:18:25.902 "superblock": true, 00:18:25.902 "num_base_bdevs": 3, 00:18:25.902 "num_base_bdevs_discovered": 3, 00:18:25.902 "num_base_bdevs_operational": 3, 00:18:25.902 "process": { 00:18:25.902 "type": "rebuild", 00:18:25.902 "target": "spare", 00:18:25.902 "progress": { 00:18:25.902 "blocks": 20480, 00:18:25.902 "percent": 16 00:18:25.902 } 00:18:25.902 }, 00:18:25.902 "base_bdevs_list": [ 00:18:25.902 { 00:18:25.902 "name": "spare", 00:18:25.902 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:25.902 "is_configured": true, 00:18:25.902 "data_offset": 2048, 00:18:25.902 "data_size": 63488 00:18:25.902 }, 00:18:25.902 { 00:18:25.902 "name": "BaseBdev2", 00:18:25.902 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:25.902 "is_configured": true, 00:18:25.902 "data_offset": 2048, 00:18:25.902 "data_size": 63488 00:18:25.902 }, 00:18:25.902 { 00:18:25.902 "name": "BaseBdev3", 00:18:25.902 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:25.902 "is_configured": true, 00:18:25.902 "data_offset": 2048, 00:18:25.902 "data_size": 63488 00:18:25.902 } 00:18:25.902 ] 00:18:25.902 }' 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.902 13:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.902 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.902 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.902 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.902 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.902 [2024-10-01 13:52:36.047699] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.161 [2024-10-01 13:52:36.098348] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.161 [2024-10-01 13:52:36.098465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.161 [2024-10-01 13:52:36.098486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.161 [2024-10-01 13:52:36.098499] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.161 "name": "raid_bdev1", 00:18:26.161 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:26.161 "strip_size_kb": 64, 00:18:26.161 "state": "online", 00:18:26.161 "raid_level": "raid5f", 00:18:26.161 "superblock": true, 00:18:26.161 "num_base_bdevs": 3, 00:18:26.161 "num_base_bdevs_discovered": 2, 00:18:26.161 "num_base_bdevs_operational": 2, 00:18:26.161 "base_bdevs_list": [ 00:18:26.161 { 00:18:26.161 "name": null, 00:18:26.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.161 "is_configured": false, 00:18:26.161 "data_offset": 0, 00:18:26.161 "data_size": 63488 00:18:26.161 }, 00:18:26.161 { 00:18:26.161 "name": "BaseBdev2", 00:18:26.161 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:26.161 "is_configured": true, 00:18:26.161 "data_offset": 2048, 00:18:26.161 "data_size": 63488 00:18:26.161 }, 00:18:26.161 { 00:18:26.161 "name": "BaseBdev3", 00:18:26.161 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:26.161 "is_configured": true, 00:18:26.161 "data_offset": 2048, 00:18:26.161 "data_size": 63488 00:18:26.161 } 00:18:26.161 ] 00:18:26.161 }' 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.161 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.420 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.420 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.420 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.420 [2024-10-01 13:52:36.555071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.420 [2024-10-01 13:52:36.555147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.420 [2024-10-01 13:52:36.555176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:26.420 [2024-10-01 13:52:36.555196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.420 [2024-10-01 13:52:36.555778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.420 [2024-10-01 13:52:36.555814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.420 [2024-10-01 13:52:36.555928] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:26.420 [2024-10-01 13:52:36.555948] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.420 [2024-10-01 13:52:36.555962] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:26.420 [2024-10-01 13:52:36.555991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.420 [2024-10-01 13:52:36.572959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:26.420 spare 00:18:26.420 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.420 13:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:26.420 [2024-10-01 13:52:36.581788] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.797 "name": "raid_bdev1", 00:18:27.797 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:27.797 "strip_size_kb": 64, 00:18:27.797 "state": 
"online", 00:18:27.797 "raid_level": "raid5f", 00:18:27.797 "superblock": true, 00:18:27.797 "num_base_bdevs": 3, 00:18:27.797 "num_base_bdevs_discovered": 3, 00:18:27.797 "num_base_bdevs_operational": 3, 00:18:27.797 "process": { 00:18:27.797 "type": "rebuild", 00:18:27.797 "target": "spare", 00:18:27.797 "progress": { 00:18:27.797 "blocks": 20480, 00:18:27.797 "percent": 16 00:18:27.797 } 00:18:27.797 }, 00:18:27.797 "base_bdevs_list": [ 00:18:27.797 { 00:18:27.797 "name": "spare", 00:18:27.797 "uuid": "9580eb89-f2d4-5432-97b5-3cc241995d20", 00:18:27.797 "is_configured": true, 00:18:27.797 "data_offset": 2048, 00:18:27.797 "data_size": 63488 00:18:27.797 }, 00:18:27.797 { 00:18:27.797 "name": "BaseBdev2", 00:18:27.797 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:27.797 "is_configured": true, 00:18:27.797 "data_offset": 2048, 00:18:27.797 "data_size": 63488 00:18:27.797 }, 00:18:27.797 { 00:18:27.797 "name": "BaseBdev3", 00:18:27.797 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:27.797 "is_configured": true, 00:18:27.797 "data_offset": 2048, 00:18:27.797 "data_size": 63488 00:18:27.797 } 00:18:27.797 ] 00:18:27.797 }' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.797 [2024-10-01 13:52:37.729634] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.797 [2024-10-01 13:52:37.792915] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.797 [2024-10-01 13:52:37.793194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.797 [2024-10-01 13:52:37.793228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.797 [2024-10-01 13:52:37.793241] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.797 "name": "raid_bdev1", 00:18:27.797 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:27.797 "strip_size_kb": 64, 00:18:27.797 "state": "online", 00:18:27.797 "raid_level": "raid5f", 00:18:27.797 "superblock": true, 00:18:27.797 "num_base_bdevs": 3, 00:18:27.797 "num_base_bdevs_discovered": 2, 00:18:27.797 "num_base_bdevs_operational": 2, 00:18:27.797 "base_bdevs_list": [ 00:18:27.797 { 00:18:27.797 "name": null, 00:18:27.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.797 "is_configured": false, 00:18:27.797 "data_offset": 0, 00:18:27.797 "data_size": 63488 00:18:27.797 }, 00:18:27.797 { 00:18:27.797 "name": "BaseBdev2", 00:18:27.797 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:27.797 "is_configured": true, 00:18:27.797 "data_offset": 2048, 00:18:27.797 "data_size": 63488 00:18:27.797 }, 00:18:27.797 { 00:18:27.797 "name": "BaseBdev3", 00:18:27.797 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:27.797 "is_configured": true, 00:18:27.797 "data_offset": 2048, 00:18:27.797 "data_size": 63488 00:18:27.797 } 00:18:27.797 ] 00:18:27.797 }' 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.797 13:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.363 "name": "raid_bdev1", 00:18:28.363 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:28.363 "strip_size_kb": 64, 00:18:28.363 "state": "online", 00:18:28.363 "raid_level": "raid5f", 00:18:28.363 "superblock": true, 00:18:28.363 "num_base_bdevs": 3, 00:18:28.363 "num_base_bdevs_discovered": 2, 00:18:28.363 "num_base_bdevs_operational": 2, 00:18:28.363 "base_bdevs_list": [ 00:18:28.363 { 00:18:28.363 "name": null, 00:18:28.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.363 "is_configured": false, 00:18:28.363 "data_offset": 0, 00:18:28.363 "data_size": 63488 00:18:28.363 }, 00:18:28.363 { 00:18:28.363 "name": "BaseBdev2", 00:18:28.363 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:28.363 "is_configured": true, 00:18:28.363 "data_offset": 2048, 00:18:28.363 "data_size": 63488 00:18:28.363 }, 00:18:28.363 { 00:18:28.363 "name": "BaseBdev3", 00:18:28.363 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:28.363 "is_configured": true, 
00:18:28.363 "data_offset": 2048, 00:18:28.363 "data_size": 63488 00:18:28.363 } 00:18:28.363 ] 00:18:28.363 }' 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.363 [2024-10-01 13:52:38.436823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.363 [2024-10-01 13:52:38.436899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.363 [2024-10-01 13:52:38.436946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:28.363 [2024-10-01 13:52:38.436960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.363 [2024-10-01 13:52:38.437477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.363 [2024-10-01 
13:52:38.437501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.363 [2024-10-01 13:52:38.437598] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:28.363 [2024-10-01 13:52:38.437616] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.363 [2024-10-01 13:52:38.437632] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.363 [2024-10-01 13:52:38.437649] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:28.363 BaseBdev1 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.363 13:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.298 13:52:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.298 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.556 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.556 "name": "raid_bdev1", 00:18:29.556 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:29.556 "strip_size_kb": 64, 00:18:29.556 "state": "online", 00:18:29.556 "raid_level": "raid5f", 00:18:29.556 "superblock": true, 00:18:29.556 "num_base_bdevs": 3, 00:18:29.556 "num_base_bdevs_discovered": 2, 00:18:29.556 "num_base_bdevs_operational": 2, 00:18:29.556 "base_bdevs_list": [ 00:18:29.556 { 00:18:29.556 "name": null, 00:18:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.556 "is_configured": false, 00:18:29.556 "data_offset": 0, 00:18:29.556 "data_size": 63488 00:18:29.556 }, 00:18:29.556 { 00:18:29.556 "name": "BaseBdev2", 00:18:29.556 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:29.556 "is_configured": true, 00:18:29.556 "data_offset": 2048, 00:18:29.556 "data_size": 63488 00:18:29.556 }, 00:18:29.556 { 00:18:29.556 "name": "BaseBdev3", 00:18:29.556 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:29.556 "is_configured": true, 00:18:29.556 "data_offset": 2048, 00:18:29.556 "data_size": 63488 00:18:29.556 } 00:18:29.556 ] 00:18:29.556 }' 00:18:29.556 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.556 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.827 "name": "raid_bdev1", 00:18:29.827 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:29.827 "strip_size_kb": 64, 00:18:29.827 "state": "online", 00:18:29.827 "raid_level": "raid5f", 00:18:29.827 "superblock": true, 00:18:29.827 "num_base_bdevs": 3, 00:18:29.827 "num_base_bdevs_discovered": 2, 00:18:29.827 "num_base_bdevs_operational": 2, 00:18:29.827 "base_bdevs_list": [ 00:18:29.827 { 00:18:29.827 "name": null, 00:18:29.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.827 "is_configured": false, 00:18:29.827 "data_offset": 0, 00:18:29.827 "data_size": 63488 00:18:29.827 }, 00:18:29.827 { 00:18:29.827 "name": "BaseBdev2", 00:18:29.827 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 
00:18:29.827 "is_configured": true, 00:18:29.827 "data_offset": 2048, 00:18:29.827 "data_size": 63488 00:18:29.827 }, 00:18:29.827 { 00:18:29.827 "name": "BaseBdev3", 00:18:29.827 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:29.827 "is_configured": true, 00:18:29.827 "data_offset": 2048, 00:18:29.827 "data_size": 63488 00:18:29.827 } 00:18:29.827 ] 00:18:29.827 }' 00:18:29.827 13:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.104 13:52:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.104 [2024-10-01 13:52:40.055688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.104 [2024-10-01 13:52:40.056053] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.104 [2024-10-01 13:52:40.056204] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.104 request: 00:18:30.104 { 00:18:30.104 "base_bdev": "BaseBdev1", 00:18:30.104 "raid_bdev": "raid_bdev1", 00:18:30.104 "method": "bdev_raid_add_base_bdev", 00:18:30.104 "req_id": 1 00:18:30.104 } 00:18:30.104 Got JSON-RPC error response 00:18:30.104 response: 00:18:30.104 { 00:18:30.104 "code": -22, 00:18:30.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:30.104 } 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.104 13:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.040 "name": "raid_bdev1", 00:18:31.040 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:31.040 "strip_size_kb": 64, 00:18:31.040 "state": "online", 00:18:31.040 "raid_level": "raid5f", 00:18:31.040 "superblock": true, 00:18:31.040 "num_base_bdevs": 3, 00:18:31.040 "num_base_bdevs_discovered": 2, 00:18:31.040 "num_base_bdevs_operational": 2, 00:18:31.040 "base_bdevs_list": [ 00:18:31.040 { 00:18:31.040 "name": null, 00:18:31.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.040 "is_configured": false, 00:18:31.040 "data_offset": 0, 00:18:31.040 "data_size": 63488 00:18:31.040 }, 00:18:31.040 { 00:18:31.040 
"name": "BaseBdev2", 00:18:31.040 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:31.040 "is_configured": true, 00:18:31.040 "data_offset": 2048, 00:18:31.040 "data_size": 63488 00:18:31.040 }, 00:18:31.040 { 00:18:31.040 "name": "BaseBdev3", 00:18:31.040 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:31.040 "is_configured": true, 00:18:31.040 "data_offset": 2048, 00:18:31.040 "data_size": 63488 00:18:31.040 } 00:18:31.040 ] 00:18:31.040 }' 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.040 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.607 "name": "raid_bdev1", 00:18:31.607 "uuid": "3e089811-0047-4914-97ae-17db34c3e922", 00:18:31.607 
"strip_size_kb": 64, 00:18:31.607 "state": "online", 00:18:31.607 "raid_level": "raid5f", 00:18:31.607 "superblock": true, 00:18:31.607 "num_base_bdevs": 3, 00:18:31.607 "num_base_bdevs_discovered": 2, 00:18:31.607 "num_base_bdevs_operational": 2, 00:18:31.607 "base_bdevs_list": [ 00:18:31.607 { 00:18:31.607 "name": null, 00:18:31.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.607 "is_configured": false, 00:18:31.607 "data_offset": 0, 00:18:31.607 "data_size": 63488 00:18:31.607 }, 00:18:31.607 { 00:18:31.607 "name": "BaseBdev2", 00:18:31.607 "uuid": "7f71daaa-f35b-5382-b31f-f335148256c4", 00:18:31.607 "is_configured": true, 00:18:31.607 "data_offset": 2048, 00:18:31.607 "data_size": 63488 00:18:31.607 }, 00:18:31.607 { 00:18:31.607 "name": "BaseBdev3", 00:18:31.607 "uuid": "dc055496-c33e-5176-932b-42fdace3fe36", 00:18:31.607 "is_configured": true, 00:18:31.607 "data_offset": 2048, 00:18:31.607 "data_size": 63488 00:18:31.607 } 00:18:31.607 ] 00:18:31.607 }' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82056 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82056 ']' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82056 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.607 13:52:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82056 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.607 killing process with pid 82056 00:18:31.607 Received shutdown signal, test time was about 60.000000 seconds 00:18:31.607 00:18:31.607 Latency(us) 00:18:31.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.607 =================================================================================================================== 00:18:31.607 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82056' 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82056 00:18:31.607 [2024-10-01 13:52:41.685233] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.607 13:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82056 00:18:31.607 [2024-10-01 13:52:41.685382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.607 [2024-10-01 13:52:41.685484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.607 [2024-10-01 13:52:41.685503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:32.175 [2024-10-01 13:52:42.103070] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.563 ************************************ 00:18:33.563 END TEST raid5f_rebuild_test_sb 00:18:33.563 ************************************ 00:18:33.563 13:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:18:33.563 00:18:33.563 real 0m23.624s 00:18:33.563 user 0m29.994s 00:18:33.563 sys 0m3.244s 00:18:33.563 13:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.563 13:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.563 13:52:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:33.563 13:52:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:33.563 13:52:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:33.563 13:52:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.563 13:52:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.563 ************************************ 00:18:33.563 START TEST raid5f_state_function_test 00:18:33.563 ************************************ 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:33.563 13:52:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:33.563 Process raid pid: 82813 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82813 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82813' 00:18:33.563 13:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82813 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82813 ']' 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.564 13:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.564 [2024-10-01 13:52:43.636012] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:18:33.564 [2024-10-01 13:52:43.636951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.822 [2024-10-01 13:52:43.813475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.082 [2024-10-01 13:52:44.041844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.082 [2024-10-01 13:52:44.270094] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.082 [2024-10-01 13:52:44.270314] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.340 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.340 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:34.340 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:34.340 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.341 [2024-10-01 13:52:44.516770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.341 [2024-10-01 13:52:44.516825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.341 [2024-10-01 13:52:44.516840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.341 [2024-10-01 13:52:44.516854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.341 [2024-10-01 13:52:44.516863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:34.341 [2024-10-01 13:52:44.516875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:34.341 [2024-10-01 13:52:44.516884] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:34.341 [2024-10-01 13:52:44.516898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.341 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.600 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.600 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.600 "name": "Existed_Raid", 00:18:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.600 "strip_size_kb": 64, 00:18:34.600 "state": "configuring", 00:18:34.600 "raid_level": "raid5f", 00:18:34.600 "superblock": false, 00:18:34.600 "num_base_bdevs": 4, 00:18:34.600 "num_base_bdevs_discovered": 0, 00:18:34.600 "num_base_bdevs_operational": 4, 00:18:34.600 "base_bdevs_list": [ 00:18:34.600 { 00:18:34.600 "name": "BaseBdev1", 00:18:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.600 "is_configured": false, 00:18:34.600 "data_offset": 0, 00:18:34.600 "data_size": 0 00:18:34.600 }, 00:18:34.600 { 00:18:34.600 "name": "BaseBdev2", 00:18:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.600 "is_configured": false, 00:18:34.600 "data_offset": 0, 00:18:34.600 "data_size": 0 00:18:34.600 }, 00:18:34.600 { 00:18:34.600 "name": "BaseBdev3", 00:18:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.600 "is_configured": false, 00:18:34.600 "data_offset": 0, 00:18:34.600 "data_size": 0 00:18:34.600 }, 00:18:34.600 { 00:18:34.600 "name": "BaseBdev4", 00:18:34.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.600 "is_configured": false, 00:18:34.600 "data_offset": 0, 00:18:34.600 "data_size": 0 00:18:34.600 } 00:18:34.600 ] 00:18:34.600 }' 00:18:34.600 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.600 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 [2024-10-01 13:52:44.968058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.858 [2024-10-01 13:52:44.968106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 [2024-10-01 13:52:44.980064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.858 [2024-10-01 13:52:44.980114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.858 [2024-10-01 13:52:44.980125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.858 [2024-10-01 13:52:44.980139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.858 [2024-10-01 13:52:44.980147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:34.858 [2024-10-01 13:52:44.980159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:34.858 [2024-10-01 13:52:44.980167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:34.858 [2024-10-01 13:52:44.980180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:34.858 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.859 13:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:34.859 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.859 13:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.859 [2024-10-01 13:52:45.040157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.859 BaseBdev1 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.859 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.117 
13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.117 [ 00:18:35.117 { 00:18:35.117 "name": "BaseBdev1", 00:18:35.117 "aliases": [ 00:18:35.117 "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3" 00:18:35.117 ], 00:18:35.117 "product_name": "Malloc disk", 00:18:35.117 "block_size": 512, 00:18:35.117 "num_blocks": 65536, 00:18:35.117 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:35.117 "assigned_rate_limits": { 00:18:35.117 "rw_ios_per_sec": 0, 00:18:35.117 "rw_mbytes_per_sec": 0, 00:18:35.117 "r_mbytes_per_sec": 0, 00:18:35.117 "w_mbytes_per_sec": 0 00:18:35.117 }, 00:18:35.117 "claimed": true, 00:18:35.117 "claim_type": "exclusive_write", 00:18:35.117 "zoned": false, 00:18:35.117 "supported_io_types": { 00:18:35.117 "read": true, 00:18:35.117 "write": true, 00:18:35.117 "unmap": true, 00:18:35.117 "flush": true, 00:18:35.117 "reset": true, 00:18:35.117 "nvme_admin": false, 00:18:35.117 "nvme_io": false, 00:18:35.117 "nvme_io_md": false, 00:18:35.117 "write_zeroes": true, 00:18:35.117 "zcopy": true, 00:18:35.117 "get_zone_info": false, 00:18:35.117 "zone_management": false, 00:18:35.117 "zone_append": false, 00:18:35.117 "compare": false, 00:18:35.117 "compare_and_write": false, 00:18:35.117 "abort": true, 00:18:35.117 "seek_hole": false, 00:18:35.117 "seek_data": false, 00:18:35.117 "copy": true, 00:18:35.117 "nvme_iov_md": false 00:18:35.117 }, 00:18:35.117 "memory_domains": [ 00:18:35.117 { 00:18:35.117 "dma_device_id": "system", 00:18:35.117 "dma_device_type": 1 00:18:35.117 }, 00:18:35.117 { 00:18:35.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.117 "dma_device_type": 2 00:18:35.117 } 00:18:35.117 ], 00:18:35.117 "driver_specific": {} 00:18:35.117 } 
00:18:35.117 ] 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.117 "name": "Existed_Raid", 00:18:35.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.117 "strip_size_kb": 64, 00:18:35.117 "state": "configuring", 00:18:35.117 "raid_level": "raid5f", 00:18:35.117 "superblock": false, 00:18:35.117 "num_base_bdevs": 4, 00:18:35.117 "num_base_bdevs_discovered": 1, 00:18:35.117 "num_base_bdevs_operational": 4, 00:18:35.117 "base_bdevs_list": [ 00:18:35.117 { 00:18:35.117 "name": "BaseBdev1", 00:18:35.117 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:35.117 "is_configured": true, 00:18:35.117 "data_offset": 0, 00:18:35.117 "data_size": 65536 00:18:35.117 }, 00:18:35.117 { 00:18:35.117 "name": "BaseBdev2", 00:18:35.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.117 "is_configured": false, 00:18:35.117 "data_offset": 0, 00:18:35.117 "data_size": 0 00:18:35.117 }, 00:18:35.117 { 00:18:35.117 "name": "BaseBdev3", 00:18:35.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.117 "is_configured": false, 00:18:35.117 "data_offset": 0, 00:18:35.117 "data_size": 0 00:18:35.117 }, 00:18:35.117 { 00:18:35.117 "name": "BaseBdev4", 00:18:35.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.117 "is_configured": false, 00:18:35.117 "data_offset": 0, 00:18:35.117 "data_size": 0 00:18:35.117 } 00:18:35.117 ] 00:18:35.117 }' 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.117 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.375 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.375 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.375 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.375 
[2024-10-01 13:52:45.563639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.375 [2024-10-01 13:52:45.563698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.633 [2024-10-01 13:52:45.575689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.633 [2024-10-01 13:52:45.578006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.633 [2024-10-01 13:52:45.578062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.633 [2024-10-01 13:52:45.578074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.633 [2024-10-01 13:52:45.578090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.633 [2024-10-01 13:52:45.578099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.633 [2024-10-01 13:52:45.578112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.633 "name": "Existed_Raid", 00:18:35.633 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:35.633 "strip_size_kb": 64, 00:18:35.633 "state": "configuring", 00:18:35.633 "raid_level": "raid5f", 00:18:35.633 "superblock": false, 00:18:35.633 "num_base_bdevs": 4, 00:18:35.633 "num_base_bdevs_discovered": 1, 00:18:35.633 "num_base_bdevs_operational": 4, 00:18:35.633 "base_bdevs_list": [ 00:18:35.633 { 00:18:35.633 "name": "BaseBdev1", 00:18:35.633 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:35.633 "is_configured": true, 00:18:35.633 "data_offset": 0, 00:18:35.633 "data_size": 65536 00:18:35.633 }, 00:18:35.633 { 00:18:35.633 "name": "BaseBdev2", 00:18:35.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.633 "is_configured": false, 00:18:35.633 "data_offset": 0, 00:18:35.633 "data_size": 0 00:18:35.633 }, 00:18:35.633 { 00:18:35.633 "name": "BaseBdev3", 00:18:35.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.633 "is_configured": false, 00:18:35.633 "data_offset": 0, 00:18:35.633 "data_size": 0 00:18:35.633 }, 00:18:35.633 { 00:18:35.633 "name": "BaseBdev4", 00:18:35.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.633 "is_configured": false, 00:18:35.633 "data_offset": 0, 00:18:35.633 "data_size": 0 00:18:35.633 } 00:18:35.633 ] 00:18:35.633 }' 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.633 13:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.890 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.890 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.890 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.890 [2024-10-01 13:52:46.049392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.890 BaseBdev2 00:18:35.890 13:52:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.890 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.891 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.891 [ 00:18:35.891 { 00:18:35.891 "name": "BaseBdev2", 00:18:35.891 "aliases": [ 00:18:35.891 "949236f6-52fa-4d0c-9b08-b6d1133d631b" 00:18:35.891 ], 00:18:36.148 "product_name": "Malloc disk", 00:18:36.148 "block_size": 512, 00:18:36.148 "num_blocks": 65536, 00:18:36.148 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:36.148 "assigned_rate_limits": { 00:18:36.148 "rw_ios_per_sec": 0, 00:18:36.148 "rw_mbytes_per_sec": 0, 00:18:36.148 
"r_mbytes_per_sec": 0, 00:18:36.148 "w_mbytes_per_sec": 0 00:18:36.148 }, 00:18:36.148 "claimed": true, 00:18:36.148 "claim_type": "exclusive_write", 00:18:36.148 "zoned": false, 00:18:36.148 "supported_io_types": { 00:18:36.148 "read": true, 00:18:36.148 "write": true, 00:18:36.148 "unmap": true, 00:18:36.148 "flush": true, 00:18:36.148 "reset": true, 00:18:36.148 "nvme_admin": false, 00:18:36.148 "nvme_io": false, 00:18:36.148 "nvme_io_md": false, 00:18:36.148 "write_zeroes": true, 00:18:36.148 "zcopy": true, 00:18:36.148 "get_zone_info": false, 00:18:36.148 "zone_management": false, 00:18:36.148 "zone_append": false, 00:18:36.148 "compare": false, 00:18:36.148 "compare_and_write": false, 00:18:36.148 "abort": true, 00:18:36.148 "seek_hole": false, 00:18:36.148 "seek_data": false, 00:18:36.148 "copy": true, 00:18:36.148 "nvme_iov_md": false 00:18:36.148 }, 00:18:36.148 "memory_domains": [ 00:18:36.148 { 00:18:36.148 "dma_device_id": "system", 00:18:36.149 "dma_device_type": 1 00:18:36.149 }, 00:18:36.149 { 00:18:36.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.149 "dma_device_type": 2 00:18:36.149 } 00:18:36.149 ], 00:18:36.149 "driver_specific": {} 00:18:36.149 } 00:18:36.149 ] 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.149 "name": "Existed_Raid", 00:18:36.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.149 "strip_size_kb": 64, 00:18:36.149 "state": "configuring", 00:18:36.149 "raid_level": "raid5f", 00:18:36.149 "superblock": false, 00:18:36.149 "num_base_bdevs": 4, 00:18:36.149 "num_base_bdevs_discovered": 2, 00:18:36.149 "num_base_bdevs_operational": 4, 00:18:36.149 "base_bdevs_list": [ 00:18:36.149 { 00:18:36.149 "name": "BaseBdev1", 00:18:36.149 "uuid": 
"7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:36.149 "is_configured": true, 00:18:36.149 "data_offset": 0, 00:18:36.149 "data_size": 65536 00:18:36.149 }, 00:18:36.149 { 00:18:36.149 "name": "BaseBdev2", 00:18:36.149 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:36.149 "is_configured": true, 00:18:36.149 "data_offset": 0, 00:18:36.149 "data_size": 65536 00:18:36.149 }, 00:18:36.149 { 00:18:36.149 "name": "BaseBdev3", 00:18:36.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.149 "is_configured": false, 00:18:36.149 "data_offset": 0, 00:18:36.149 "data_size": 0 00:18:36.149 }, 00:18:36.149 { 00:18:36.149 "name": "BaseBdev4", 00:18:36.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.149 "is_configured": false, 00:18:36.149 "data_offset": 0, 00:18:36.149 "data_size": 0 00:18:36.149 } 00:18:36.149 ] 00:18:36.149 }' 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.149 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.406 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:36.406 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.406 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 [2024-10-01 13:52:46.602189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:36.664 BaseBdev3 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 [ 00:18:36.664 { 00:18:36.664 "name": "BaseBdev3", 00:18:36.664 "aliases": [ 00:18:36.664 "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2" 00:18:36.664 ], 00:18:36.664 "product_name": "Malloc disk", 00:18:36.664 "block_size": 512, 00:18:36.664 "num_blocks": 65536, 00:18:36.664 "uuid": "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2", 00:18:36.664 "assigned_rate_limits": { 00:18:36.664 "rw_ios_per_sec": 0, 00:18:36.664 "rw_mbytes_per_sec": 0, 00:18:36.664 "r_mbytes_per_sec": 0, 00:18:36.664 "w_mbytes_per_sec": 0 00:18:36.664 }, 00:18:36.664 "claimed": true, 00:18:36.664 "claim_type": "exclusive_write", 00:18:36.664 "zoned": false, 00:18:36.664 "supported_io_types": { 00:18:36.664 "read": true, 00:18:36.664 "write": true, 00:18:36.664 "unmap": true, 00:18:36.664 "flush": true, 00:18:36.664 "reset": true, 00:18:36.664 "nvme_admin": false, 
00:18:36.664 "nvme_io": false, 00:18:36.664 "nvme_io_md": false, 00:18:36.664 "write_zeroes": true, 00:18:36.664 "zcopy": true, 00:18:36.664 "get_zone_info": false, 00:18:36.664 "zone_management": false, 00:18:36.664 "zone_append": false, 00:18:36.664 "compare": false, 00:18:36.664 "compare_and_write": false, 00:18:36.664 "abort": true, 00:18:36.664 "seek_hole": false, 00:18:36.664 "seek_data": false, 00:18:36.664 "copy": true, 00:18:36.664 "nvme_iov_md": false 00:18:36.664 }, 00:18:36.664 "memory_domains": [ 00:18:36.664 { 00:18:36.664 "dma_device_id": "system", 00:18:36.664 "dma_device_type": 1 00:18:36.664 }, 00:18:36.664 { 00:18:36.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.664 "dma_device_type": 2 00:18:36.664 } 00:18:36.664 ], 00:18:36.664 "driver_specific": {} 00:18:36.664 } 00:18:36.664 ] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.664 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.664 "name": "Existed_Raid", 00:18:36.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.664 "strip_size_kb": 64, 00:18:36.664 "state": "configuring", 00:18:36.664 "raid_level": "raid5f", 00:18:36.664 "superblock": false, 00:18:36.664 "num_base_bdevs": 4, 00:18:36.664 "num_base_bdevs_discovered": 3, 00:18:36.664 "num_base_bdevs_operational": 4, 00:18:36.664 "base_bdevs_list": [ 00:18:36.664 { 00:18:36.664 "name": "BaseBdev1", 00:18:36.664 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:36.664 "is_configured": true, 00:18:36.664 "data_offset": 0, 00:18:36.664 "data_size": 65536 00:18:36.664 }, 00:18:36.664 { 00:18:36.664 "name": "BaseBdev2", 00:18:36.664 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:36.664 "is_configured": true, 00:18:36.664 "data_offset": 0, 00:18:36.664 "data_size": 65536 00:18:36.664 }, 00:18:36.664 { 
00:18:36.664 "name": "BaseBdev3", 00:18:36.664 "uuid": "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2", 00:18:36.664 "is_configured": true, 00:18:36.664 "data_offset": 0, 00:18:36.664 "data_size": 65536 00:18:36.664 }, 00:18:36.664 { 00:18:36.664 "name": "BaseBdev4", 00:18:36.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.664 "is_configured": false, 00:18:36.664 "data_offset": 0, 00:18:36.664 "data_size": 0 00:18:36.664 } 00:18:36.664 ] 00:18:36.664 }' 00:18:36.665 13:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.665 13:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.923 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:36.923 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.923 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.181 [2024-10-01 13:52:47.142937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:37.181 [2024-10-01 13:52:47.143013] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:37.181 [2024-10-01 13:52:47.143024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:37.181 [2024-10-01 13:52:47.143302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:37.181 [2024-10-01 13:52:47.151625] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:37.181 [2024-10-01 13:52:47.151785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:37.181 [2024-10-01 13:52:47.152185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.181 BaseBdev4 00:18:37.181 13:52:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.181 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.182 [ 00:18:37.182 { 00:18:37.182 "name": "BaseBdev4", 00:18:37.182 "aliases": [ 00:18:37.182 "e07c260b-0a74-4526-b46c-f4f4dff6fa39" 00:18:37.182 ], 00:18:37.182 "product_name": "Malloc disk", 00:18:37.182 "block_size": 512, 00:18:37.182 "num_blocks": 65536, 00:18:37.182 "uuid": "e07c260b-0a74-4526-b46c-f4f4dff6fa39", 00:18:37.182 "assigned_rate_limits": { 00:18:37.182 "rw_ios_per_sec": 0, 00:18:37.182 
"rw_mbytes_per_sec": 0, 00:18:37.182 "r_mbytes_per_sec": 0, 00:18:37.182 "w_mbytes_per_sec": 0 00:18:37.182 }, 00:18:37.182 "claimed": true, 00:18:37.182 "claim_type": "exclusive_write", 00:18:37.182 "zoned": false, 00:18:37.182 "supported_io_types": { 00:18:37.182 "read": true, 00:18:37.182 "write": true, 00:18:37.182 "unmap": true, 00:18:37.182 "flush": true, 00:18:37.182 "reset": true, 00:18:37.182 "nvme_admin": false, 00:18:37.182 "nvme_io": false, 00:18:37.182 "nvme_io_md": false, 00:18:37.182 "write_zeroes": true, 00:18:37.182 "zcopy": true, 00:18:37.182 "get_zone_info": false, 00:18:37.182 "zone_management": false, 00:18:37.182 "zone_append": false, 00:18:37.182 "compare": false, 00:18:37.182 "compare_and_write": false, 00:18:37.182 "abort": true, 00:18:37.182 "seek_hole": false, 00:18:37.182 "seek_data": false, 00:18:37.182 "copy": true, 00:18:37.182 "nvme_iov_md": false 00:18:37.182 }, 00:18:37.182 "memory_domains": [ 00:18:37.182 { 00:18:37.182 "dma_device_id": "system", 00:18:37.182 "dma_device_type": 1 00:18:37.182 }, 00:18:37.182 { 00:18:37.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.182 "dma_device_type": 2 00:18:37.182 } 00:18:37.182 ], 00:18:37.182 "driver_specific": {} 00:18:37.182 } 00:18:37.182 ] 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.182 13:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.182 "name": "Existed_Raid", 00:18:37.182 "uuid": "b8109062-e3bb-48d1-95da-ef165c7b8c28", 00:18:37.182 "strip_size_kb": 64, 00:18:37.182 "state": "online", 00:18:37.182 "raid_level": "raid5f", 00:18:37.182 "superblock": false, 00:18:37.182 "num_base_bdevs": 4, 00:18:37.182 "num_base_bdevs_discovered": 4, 00:18:37.182 "num_base_bdevs_operational": 4, 00:18:37.182 "base_bdevs_list": [ 00:18:37.182 { 00:18:37.182 "name": 
"BaseBdev1", 00:18:37.182 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:37.182 "is_configured": true, 00:18:37.182 "data_offset": 0, 00:18:37.182 "data_size": 65536 00:18:37.182 }, 00:18:37.182 { 00:18:37.182 "name": "BaseBdev2", 00:18:37.182 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:37.182 "is_configured": true, 00:18:37.182 "data_offset": 0, 00:18:37.182 "data_size": 65536 00:18:37.182 }, 00:18:37.182 { 00:18:37.182 "name": "BaseBdev3", 00:18:37.182 "uuid": "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2", 00:18:37.182 "is_configured": true, 00:18:37.182 "data_offset": 0, 00:18:37.182 "data_size": 65536 00:18:37.182 }, 00:18:37.182 { 00:18:37.182 "name": "BaseBdev4", 00:18:37.182 "uuid": "e07c260b-0a74-4526-b46c-f4f4dff6fa39", 00:18:37.182 "is_configured": true, 00:18:37.182 "data_offset": 0, 00:18:37.182 "data_size": 65536 00:18:37.182 } 00:18:37.182 ] 00:18:37.182 }' 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.182 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.749 [2024-10-01 13:52:47.652676] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.749 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.749 "name": "Existed_Raid", 00:18:37.749 "aliases": [ 00:18:37.749 "b8109062-e3bb-48d1-95da-ef165c7b8c28" 00:18:37.749 ], 00:18:37.750 "product_name": "Raid Volume", 00:18:37.750 "block_size": 512, 00:18:37.750 "num_blocks": 196608, 00:18:37.750 "uuid": "b8109062-e3bb-48d1-95da-ef165c7b8c28", 00:18:37.750 "assigned_rate_limits": { 00:18:37.750 "rw_ios_per_sec": 0, 00:18:37.750 "rw_mbytes_per_sec": 0, 00:18:37.750 "r_mbytes_per_sec": 0, 00:18:37.750 "w_mbytes_per_sec": 0 00:18:37.750 }, 00:18:37.750 "claimed": false, 00:18:37.750 "zoned": false, 00:18:37.750 "supported_io_types": { 00:18:37.750 "read": true, 00:18:37.750 "write": true, 00:18:37.750 "unmap": false, 00:18:37.750 "flush": false, 00:18:37.750 "reset": true, 00:18:37.750 "nvme_admin": false, 00:18:37.750 "nvme_io": false, 00:18:37.750 "nvme_io_md": false, 00:18:37.750 "write_zeroes": true, 00:18:37.750 "zcopy": false, 00:18:37.750 "get_zone_info": false, 00:18:37.750 "zone_management": false, 00:18:37.750 "zone_append": false, 00:18:37.750 "compare": false, 00:18:37.750 "compare_and_write": false, 00:18:37.750 "abort": false, 00:18:37.750 "seek_hole": false, 00:18:37.750 "seek_data": false, 00:18:37.750 "copy": false, 00:18:37.750 "nvme_iov_md": false 00:18:37.750 }, 00:18:37.750 "driver_specific": { 00:18:37.750 "raid": { 00:18:37.750 "uuid": "b8109062-e3bb-48d1-95da-ef165c7b8c28", 00:18:37.750 "strip_size_kb": 64, 
00:18:37.750 "state": "online", 00:18:37.750 "raid_level": "raid5f", 00:18:37.750 "superblock": false, 00:18:37.750 "num_base_bdevs": 4, 00:18:37.750 "num_base_bdevs_discovered": 4, 00:18:37.750 "num_base_bdevs_operational": 4, 00:18:37.750 "base_bdevs_list": [ 00:18:37.750 { 00:18:37.750 "name": "BaseBdev1", 00:18:37.750 "uuid": "7fbc405c-dd5c-4bec-8c84-e1ec7ffe89c3", 00:18:37.750 "is_configured": true, 00:18:37.750 "data_offset": 0, 00:18:37.750 "data_size": 65536 00:18:37.750 }, 00:18:37.750 { 00:18:37.750 "name": "BaseBdev2", 00:18:37.750 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:37.750 "is_configured": true, 00:18:37.750 "data_offset": 0, 00:18:37.750 "data_size": 65536 00:18:37.750 }, 00:18:37.750 { 00:18:37.750 "name": "BaseBdev3", 00:18:37.750 "uuid": "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2", 00:18:37.750 "is_configured": true, 00:18:37.750 "data_offset": 0, 00:18:37.750 "data_size": 65536 00:18:37.750 }, 00:18:37.750 { 00:18:37.750 "name": "BaseBdev4", 00:18:37.750 "uuid": "e07c260b-0a74-4526-b46c-f4f4dff6fa39", 00:18:37.750 "is_configured": true, 00:18:37.750 "data_offset": 0, 00:18:37.750 "data_size": 65536 00:18:37.750 } 00:18:37.750 ] 00:18:37.750 } 00:18:37.750 } 00:18:37.750 }' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:37.750 BaseBdev2 00:18:37.750 BaseBdev3 00:18:37.750 BaseBdev4' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.750 13:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.750 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.008 13:52:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.008 [2024-10-01 13:52:47.976033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.008 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.009 13:52:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.009 "name": "Existed_Raid", 00:18:38.009 "uuid": "b8109062-e3bb-48d1-95da-ef165c7b8c28", 00:18:38.009 "strip_size_kb": 64, 00:18:38.009 "state": "online", 00:18:38.009 "raid_level": "raid5f", 00:18:38.009 "superblock": false, 00:18:38.009 "num_base_bdevs": 4, 00:18:38.009 "num_base_bdevs_discovered": 3, 00:18:38.009 "num_base_bdevs_operational": 3, 00:18:38.009 "base_bdevs_list": [ 00:18:38.009 { 00:18:38.009 "name": null, 00:18:38.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.009 "is_configured": false, 00:18:38.009 "data_offset": 0, 00:18:38.009 "data_size": 65536 00:18:38.009 }, 00:18:38.009 { 00:18:38.009 "name": "BaseBdev2", 00:18:38.009 "uuid": "949236f6-52fa-4d0c-9b08-b6d1133d631b", 00:18:38.009 "is_configured": true, 00:18:38.009 "data_offset": 0, 00:18:38.009 "data_size": 65536 00:18:38.009 }, 00:18:38.009 { 00:18:38.009 "name": "BaseBdev3", 00:18:38.009 "uuid": "6e7eafb5-d89e-4f4f-81bf-f5c7c4fd65f2", 00:18:38.009 "is_configured": true, 00:18:38.009 "data_offset": 0, 00:18:38.009 "data_size": 65536 00:18:38.009 }, 00:18:38.009 { 00:18:38.009 "name": "BaseBdev4", 00:18:38.009 "uuid": "e07c260b-0a74-4526-b46c-f4f4dff6fa39", 00:18:38.009 "is_configured": true, 00:18:38.009 "data_offset": 0, 00:18:38.009 "data_size": 65536 00:18:38.009 } 00:18:38.009 ] 00:18:38.009 }' 00:18:38.009 
13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.009 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.617 [2024-10-01 13:52:48.545528] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.617 [2024-10-01 13:52:48.545638] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.617 [2024-10-01 13:52:48.647892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.617 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.617 [2024-10-01 13:52:48.703871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 [2024-10-01 13:52:48.862475] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:38.876 [2024-10-01 13:52:48.862529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.876 13:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 13:52:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 BaseBdev2 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:38.876 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.135 [ 00:18:39.135 { 00:18:39.135 "name": "BaseBdev2", 00:18:39.135 "aliases": [ 00:18:39.135 "81681a3c-c48d-458a-ae44-f49ca1f18755" 00:18:39.135 ], 00:18:39.135 "product_name": "Malloc disk", 00:18:39.135 "block_size": 512, 00:18:39.135 "num_blocks": 65536, 00:18:39.135 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:39.135 "assigned_rate_limits": { 00:18:39.135 "rw_ios_per_sec": 0, 00:18:39.135 "rw_mbytes_per_sec": 0, 00:18:39.135 "r_mbytes_per_sec": 0, 00:18:39.135 "w_mbytes_per_sec": 0 00:18:39.135 }, 00:18:39.135 "claimed": false, 00:18:39.135 "zoned": false, 00:18:39.135 "supported_io_types": { 00:18:39.135 "read": true, 00:18:39.135 "write": true, 00:18:39.135 "unmap": true, 00:18:39.135 "flush": true, 00:18:39.135 "reset": true, 00:18:39.135 "nvme_admin": false, 00:18:39.135 "nvme_io": false, 00:18:39.135 "nvme_io_md": false, 00:18:39.135 "write_zeroes": true, 00:18:39.135 "zcopy": true, 00:18:39.135 "get_zone_info": false, 00:18:39.135 "zone_management": false, 00:18:39.135 "zone_append": false, 00:18:39.135 "compare": false, 00:18:39.135 "compare_and_write": false, 00:18:39.135 "abort": true, 00:18:39.135 "seek_hole": false, 00:18:39.135 "seek_data": false, 00:18:39.135 "copy": true, 00:18:39.135 "nvme_iov_md": false 00:18:39.135 }, 00:18:39.135 "memory_domains": [ 00:18:39.135 { 00:18:39.135 "dma_device_id": "system", 00:18:39.135 "dma_device_type": 1 00:18:39.135 }, 
00:18:39.135 { 00:18:39.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.135 "dma_device_type": 2 00:18:39.135 } 00:18:39.135 ], 00:18:39.135 "driver_specific": {} 00:18:39.135 } 00:18:39.135 ] 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.135 BaseBdev3 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.135 13:52:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 [ 00:18:39.136 { 00:18:39.136 "name": "BaseBdev3", 00:18:39.136 "aliases": [ 00:18:39.136 "27d2a084-2566-4362-b2ae-ca0835080e9c" 00:18:39.136 ], 00:18:39.136 "product_name": "Malloc disk", 00:18:39.136 "block_size": 512, 00:18:39.136 "num_blocks": 65536, 00:18:39.136 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:39.136 "assigned_rate_limits": { 00:18:39.136 "rw_ios_per_sec": 0, 00:18:39.136 "rw_mbytes_per_sec": 0, 00:18:39.136 "r_mbytes_per_sec": 0, 00:18:39.136 "w_mbytes_per_sec": 0 00:18:39.136 }, 00:18:39.136 "claimed": false, 00:18:39.136 "zoned": false, 00:18:39.136 "supported_io_types": { 00:18:39.136 "read": true, 00:18:39.136 "write": true, 00:18:39.136 "unmap": true, 00:18:39.136 "flush": true, 00:18:39.136 "reset": true, 00:18:39.136 "nvme_admin": false, 00:18:39.136 "nvme_io": false, 00:18:39.136 "nvme_io_md": false, 00:18:39.136 "write_zeroes": true, 00:18:39.136 "zcopy": true, 00:18:39.136 "get_zone_info": false, 00:18:39.136 "zone_management": false, 00:18:39.136 "zone_append": false, 00:18:39.136 "compare": false, 00:18:39.136 "compare_and_write": false, 00:18:39.136 "abort": true, 00:18:39.136 "seek_hole": false, 00:18:39.136 "seek_data": false, 00:18:39.136 "copy": true, 00:18:39.136 "nvme_iov_md": false 00:18:39.136 }, 00:18:39.136 "memory_domains": [ 00:18:39.136 { 00:18:39.136 "dma_device_id": "system", 00:18:39.136 
"dma_device_type": 1 00:18:39.136 }, 00:18:39.136 { 00:18:39.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.136 "dma_device_type": 2 00:18:39.136 } 00:18:39.136 ], 00:18:39.136 "driver_specific": {} 00:18:39.136 } 00:18:39.136 ] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 BaseBdev4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.136 13:52:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 [ 00:18:39.136 { 00:18:39.136 "name": "BaseBdev4", 00:18:39.136 "aliases": [ 00:18:39.136 "143cb2e3-053d-410e-aac4-0ee58cf9bcc3" 00:18:39.136 ], 00:18:39.136 "product_name": "Malloc disk", 00:18:39.136 "block_size": 512, 00:18:39.136 "num_blocks": 65536, 00:18:39.136 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:39.136 "assigned_rate_limits": { 00:18:39.136 "rw_ios_per_sec": 0, 00:18:39.136 "rw_mbytes_per_sec": 0, 00:18:39.136 "r_mbytes_per_sec": 0, 00:18:39.136 "w_mbytes_per_sec": 0 00:18:39.136 }, 00:18:39.136 "claimed": false, 00:18:39.136 "zoned": false, 00:18:39.136 "supported_io_types": { 00:18:39.136 "read": true, 00:18:39.136 "write": true, 00:18:39.136 "unmap": true, 00:18:39.136 "flush": true, 00:18:39.136 "reset": true, 00:18:39.136 "nvme_admin": false, 00:18:39.136 "nvme_io": false, 00:18:39.136 "nvme_io_md": false, 00:18:39.136 "write_zeroes": true, 00:18:39.136 "zcopy": true, 00:18:39.136 "get_zone_info": false, 00:18:39.136 "zone_management": false, 00:18:39.136 "zone_append": false, 00:18:39.136 "compare": false, 00:18:39.136 "compare_and_write": false, 00:18:39.136 "abort": true, 00:18:39.136 "seek_hole": false, 00:18:39.136 "seek_data": false, 00:18:39.136 "copy": true, 00:18:39.136 "nvme_iov_md": false 00:18:39.136 }, 00:18:39.136 "memory_domains": [ 00:18:39.136 { 00:18:39.136 
"dma_device_id": "system", 00:18:39.136 "dma_device_type": 1 00:18:39.136 }, 00:18:39.136 { 00:18:39.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.136 "dma_device_type": 2 00:18:39.136 } 00:18:39.136 ], 00:18:39.136 "driver_specific": {} 00:18:39.136 } 00:18:39.136 ] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 [2024-10-01 13:52:49.287272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.136 [2024-10-01 13:52:49.287485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.136 [2024-10-01 13:52:49.287640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.136 [2024-10-01 13:52:49.289854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.136 [2024-10-01 13:52:49.290035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.395 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.395 "name": "Existed_Raid", 00:18:39.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.395 "strip_size_kb": 64, 00:18:39.395 "state": "configuring", 00:18:39.395 "raid_level": "raid5f", 00:18:39.395 "superblock": false, 00:18:39.395 
"num_base_bdevs": 4, 00:18:39.395 "num_base_bdevs_discovered": 3, 00:18:39.395 "num_base_bdevs_operational": 4, 00:18:39.395 "base_bdevs_list": [ 00:18:39.395 { 00:18:39.395 "name": "BaseBdev1", 00:18:39.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.395 "is_configured": false, 00:18:39.395 "data_offset": 0, 00:18:39.395 "data_size": 0 00:18:39.395 }, 00:18:39.395 { 00:18:39.395 "name": "BaseBdev2", 00:18:39.395 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:39.395 "is_configured": true, 00:18:39.395 "data_offset": 0, 00:18:39.395 "data_size": 65536 00:18:39.395 }, 00:18:39.395 { 00:18:39.395 "name": "BaseBdev3", 00:18:39.395 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:39.395 "is_configured": true, 00:18:39.395 "data_offset": 0, 00:18:39.395 "data_size": 65536 00:18:39.395 }, 00:18:39.395 { 00:18:39.395 "name": "BaseBdev4", 00:18:39.395 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:39.395 "is_configured": true, 00:18:39.395 "data_offset": 0, 00:18:39.395 "data_size": 65536 00:18:39.395 } 00:18:39.395 ] 00:18:39.395 }' 00:18:39.395 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.395 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.654 [2024-10-01 13:52:49.722630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
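The trace above calls `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` against the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns, then removes `BaseBdev2` and re-verifies. As a rough sketch of what that check amounts to, here is a minimal Python re-implementation driven by the dump printed above; the field names come straight from the log, but the exact set of assertions made by the shell helper is an assumption, not a transcription of `bdev_raid.sh`:

```python
import json

# "Existed_Raid" info as dumped by `bdev_raid_get_bdevs all` in the log
# above, abbreviated to the fields the state check needs.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Approximate the checks verify_raid_bdev_state performs, based on
    the locals visible in the trace (raid_bdev_name, expected_state,
    raid_level, strip_size, num_base_bdevs_operational)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts the base bdevs currently configured in the array.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4))
```

With the dump above (BaseBdev1 not yet created, the other three claimed), three base bdevs are discovered while four are operational, which is why the raid stays in the `configuring` state rather than going `online`.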
00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.654 "name": "Existed_Raid", 00:18:39.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.654 "strip_size_kb": 64, 00:18:39.654 "state": "configuring", 00:18:39.654 "raid_level": "raid5f", 00:18:39.654 "superblock": false, 00:18:39.654 "num_base_bdevs": 4, 
00:18:39.654 "num_base_bdevs_discovered": 2, 00:18:39.654 "num_base_bdevs_operational": 4, 00:18:39.654 "base_bdevs_list": [ 00:18:39.654 { 00:18:39.654 "name": "BaseBdev1", 00:18:39.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.654 "is_configured": false, 00:18:39.654 "data_offset": 0, 00:18:39.654 "data_size": 0 00:18:39.654 }, 00:18:39.654 { 00:18:39.654 "name": null, 00:18:39.654 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:39.654 "is_configured": false, 00:18:39.654 "data_offset": 0, 00:18:39.654 "data_size": 65536 00:18:39.654 }, 00:18:39.654 { 00:18:39.654 "name": "BaseBdev3", 00:18:39.654 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:39.654 "is_configured": true, 00:18:39.654 "data_offset": 0, 00:18:39.654 "data_size": 65536 00:18:39.654 }, 00:18:39.654 { 00:18:39.654 "name": "BaseBdev4", 00:18:39.654 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:39.654 "is_configured": true, 00:18:39.654 "data_offset": 0, 00:18:39.654 "data_size": 65536 00:18:39.654 } 00:18:39.654 ] 00:18:39.654 }' 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.654 13:52:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:40.222 13:52:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.222 [2024-10-01 13:52:50.269178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.222 BaseBdev1 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.222 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.223 13:52:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.223 [ 00:18:40.223 { 00:18:40.223 "name": "BaseBdev1", 00:18:40.223 "aliases": [ 00:18:40.223 "0c34ac3c-4d04-487c-998b-fa590240ed27" 00:18:40.223 ], 00:18:40.223 "product_name": "Malloc disk", 00:18:40.223 "block_size": 512, 00:18:40.223 "num_blocks": 65536, 00:18:40.223 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:40.223 "assigned_rate_limits": { 00:18:40.223 "rw_ios_per_sec": 0, 00:18:40.223 "rw_mbytes_per_sec": 0, 00:18:40.223 "r_mbytes_per_sec": 0, 00:18:40.223 "w_mbytes_per_sec": 0 00:18:40.223 }, 00:18:40.223 "claimed": true, 00:18:40.223 "claim_type": "exclusive_write", 00:18:40.223 "zoned": false, 00:18:40.223 "supported_io_types": { 00:18:40.223 "read": true, 00:18:40.223 "write": true, 00:18:40.223 "unmap": true, 00:18:40.223 "flush": true, 00:18:40.223 "reset": true, 00:18:40.223 "nvme_admin": false, 00:18:40.223 "nvme_io": false, 00:18:40.223 "nvme_io_md": false, 00:18:40.223 "write_zeroes": true, 00:18:40.223 "zcopy": true, 00:18:40.223 "get_zone_info": false, 00:18:40.223 "zone_management": false, 00:18:40.223 "zone_append": false, 00:18:40.223 "compare": false, 00:18:40.223 "compare_and_write": false, 00:18:40.223 "abort": true, 00:18:40.223 "seek_hole": false, 00:18:40.223 "seek_data": false, 00:18:40.223 "copy": true, 00:18:40.223 "nvme_iov_md": false 00:18:40.223 }, 00:18:40.223 "memory_domains": [ 00:18:40.223 { 00:18:40.223 "dma_device_id": "system", 00:18:40.223 "dma_device_type": 1 00:18:40.223 }, 00:18:40.223 { 00:18:40.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.223 "dma_device_type": 2 00:18:40.223 } 00:18:40.223 ], 00:18:40.223 "driver_specific": {} 00:18:40.223 } 00:18:40.223 ] 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:40.223 13:52:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.223 "name": "Existed_Raid", 00:18:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.223 "strip_size_kb": 64, 00:18:40.223 "state": 
"configuring", 00:18:40.223 "raid_level": "raid5f", 00:18:40.223 "superblock": false, 00:18:40.223 "num_base_bdevs": 4, 00:18:40.223 "num_base_bdevs_discovered": 3, 00:18:40.223 "num_base_bdevs_operational": 4, 00:18:40.223 "base_bdevs_list": [ 00:18:40.223 { 00:18:40.223 "name": "BaseBdev1", 00:18:40.223 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:40.223 "is_configured": true, 00:18:40.223 "data_offset": 0, 00:18:40.223 "data_size": 65536 00:18:40.223 }, 00:18:40.223 { 00:18:40.223 "name": null, 00:18:40.223 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:40.223 "is_configured": false, 00:18:40.223 "data_offset": 0, 00:18:40.223 "data_size": 65536 00:18:40.223 }, 00:18:40.223 { 00:18:40.223 "name": "BaseBdev3", 00:18:40.223 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:40.223 "is_configured": true, 00:18:40.223 "data_offset": 0, 00:18:40.223 "data_size": 65536 00:18:40.223 }, 00:18:40.223 { 00:18:40.223 "name": "BaseBdev4", 00:18:40.223 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:40.223 "is_configured": true, 00:18:40.223 "data_offset": 0, 00:18:40.223 "data_size": 65536 00:18:40.223 } 00:18:40.223 ] 00:18:40.223 }' 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.223 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.791 13:52:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:40.791 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.792 [2024-10-01 13:52:50.824548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.792 13:52:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.792 "name": "Existed_Raid", 00:18:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.792 "strip_size_kb": 64, 00:18:40.792 "state": "configuring", 00:18:40.792 "raid_level": "raid5f", 00:18:40.792 "superblock": false, 00:18:40.792 "num_base_bdevs": 4, 00:18:40.792 "num_base_bdevs_discovered": 2, 00:18:40.792 "num_base_bdevs_operational": 4, 00:18:40.792 "base_bdevs_list": [ 00:18:40.792 { 00:18:40.792 "name": "BaseBdev1", 00:18:40.792 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:40.792 "is_configured": true, 00:18:40.792 "data_offset": 0, 00:18:40.792 "data_size": 65536 00:18:40.792 }, 00:18:40.792 { 00:18:40.792 "name": null, 00:18:40.792 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:40.792 "is_configured": false, 00:18:40.792 "data_offset": 0, 00:18:40.792 "data_size": 65536 00:18:40.792 }, 00:18:40.792 { 00:18:40.792 "name": null, 00:18:40.792 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:40.792 "is_configured": false, 00:18:40.792 "data_offset": 0, 00:18:40.792 "data_size": 65536 00:18:40.792 }, 00:18:40.792 { 00:18:40.792 "name": "BaseBdev4", 00:18:40.792 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:40.792 "is_configured": true, 00:18:40.792 "data_offset": 0, 00:18:40.792 "data_size": 65536 00:18:40.792 } 00:18:40.792 ] 00:18:40.792 }' 00:18:40.792 13:52:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.792 13:52:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.359 [2024-10-01 13:52:51.323888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.359 
13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.359 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.360 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.360 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.360 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.360 "name": "Existed_Raid", 00:18:41.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.360 "strip_size_kb": 64, 00:18:41.360 "state": "configuring", 00:18:41.360 "raid_level": "raid5f", 00:18:41.360 "superblock": false, 00:18:41.360 "num_base_bdevs": 4, 00:18:41.360 "num_base_bdevs_discovered": 3, 00:18:41.360 "num_base_bdevs_operational": 4, 00:18:41.360 "base_bdevs_list": [ 00:18:41.360 { 00:18:41.360 "name": "BaseBdev1", 00:18:41.360 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:41.360 "is_configured": true, 00:18:41.360 "data_offset": 0, 00:18:41.360 "data_size": 65536 00:18:41.360 }, 00:18:41.360 { 00:18:41.360 "name": null, 00:18:41.360 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:41.360 "is_configured": 
false, 00:18:41.360 "data_offset": 0, 00:18:41.360 "data_size": 65536 00:18:41.360 }, 00:18:41.360 { 00:18:41.360 "name": "BaseBdev3", 00:18:41.360 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:41.360 "is_configured": true, 00:18:41.360 "data_offset": 0, 00:18:41.360 "data_size": 65536 00:18:41.360 }, 00:18:41.360 { 00:18:41.360 "name": "BaseBdev4", 00:18:41.360 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:41.360 "is_configured": true, 00:18:41.360 "data_offset": 0, 00:18:41.360 "data_size": 65536 00:18:41.360 } 00:18:41.360 ] 00:18:41.360 }' 00:18:41.360 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.360 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.618 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 [2024-10-01 13:52:51.811780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.876 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.877 "name": "Existed_Raid", 00:18:41.877 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:41.877 "strip_size_kb": 64, 00:18:41.877 "state": "configuring", 00:18:41.877 "raid_level": "raid5f", 00:18:41.877 "superblock": false, 00:18:41.877 "num_base_bdevs": 4, 00:18:41.877 "num_base_bdevs_discovered": 2, 00:18:41.877 "num_base_bdevs_operational": 4, 00:18:41.877 "base_bdevs_list": [ 00:18:41.877 { 00:18:41.877 "name": null, 00:18:41.877 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:41.877 "is_configured": false, 00:18:41.877 "data_offset": 0, 00:18:41.877 "data_size": 65536 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": null, 00:18:41.877 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:41.877 "is_configured": false, 00:18:41.877 "data_offset": 0, 00:18:41.877 "data_size": 65536 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": "BaseBdev3", 00:18:41.877 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 0, 00:18:41.877 "data_size": 65536 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": "BaseBdev4", 00:18:41.877 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 0, 00:18:41.877 "data_size": 65536 00:18:41.877 } 00:18:41.877 ] 00:18:41.877 }' 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.877 13:52:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.444 [2024-10-01 13:52:52.387701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.444 "name": "Existed_Raid", 00:18:42.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.444 "strip_size_kb": 64, 00:18:42.444 "state": "configuring", 00:18:42.444 "raid_level": "raid5f", 00:18:42.444 "superblock": false, 00:18:42.444 "num_base_bdevs": 4, 00:18:42.444 "num_base_bdevs_discovered": 3, 00:18:42.444 "num_base_bdevs_operational": 4, 00:18:42.444 "base_bdevs_list": [ 00:18:42.444 { 00:18:42.444 "name": null, 00:18:42.444 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:42.444 "is_configured": false, 00:18:42.444 "data_offset": 0, 00:18:42.444 "data_size": 65536 00:18:42.444 }, 00:18:42.444 { 00:18:42.444 "name": "BaseBdev2", 00:18:42.444 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:42.444 "is_configured": true, 00:18:42.444 "data_offset": 0, 00:18:42.444 "data_size": 65536 00:18:42.444 }, 00:18:42.444 { 00:18:42.444 "name": "BaseBdev3", 00:18:42.444 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:42.444 "is_configured": true, 00:18:42.444 "data_offset": 0, 00:18:42.444 "data_size": 65536 00:18:42.444 }, 00:18:42.444 { 00:18:42.444 "name": "BaseBdev4", 00:18:42.444 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:42.444 "is_configured": true, 00:18:42.444 "data_offset": 0, 00:18:42.444 "data_size": 65536 00:18:42.444 } 00:18:42.444 ] 00:18:42.444 }' 00:18:42.444 13:52:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.444 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.703 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c34ac3c-4d04-487c-998b-fa590240ed27 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.989 [2024-10-01 13:52:52.962873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:42.989 [2024-10-01 
13:52:52.963118] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:42.989 [2024-10-01 13:52:52.963140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:42.989 [2024-10-01 13:52:52.963488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:42.989 [2024-10-01 13:52:52.971170] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:42.989 [2024-10-01 13:52:52.971305] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:42.989 [2024-10-01 13:52:52.971714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.989 NewBaseBdev 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.989 13:52:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.989 [ 00:18:42.989 { 00:18:42.989 "name": "NewBaseBdev", 00:18:42.989 "aliases": [ 00:18:42.989 "0c34ac3c-4d04-487c-998b-fa590240ed27" 00:18:42.989 ], 00:18:42.989 "product_name": "Malloc disk", 00:18:42.989 "block_size": 512, 00:18:42.989 "num_blocks": 65536, 00:18:42.989 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:42.989 "assigned_rate_limits": { 00:18:42.989 "rw_ios_per_sec": 0, 00:18:42.989 "rw_mbytes_per_sec": 0, 00:18:42.989 "r_mbytes_per_sec": 0, 00:18:42.989 "w_mbytes_per_sec": 0 00:18:42.989 }, 00:18:42.989 "claimed": true, 00:18:42.989 "claim_type": "exclusive_write", 00:18:42.989 "zoned": false, 00:18:42.989 "supported_io_types": { 00:18:42.989 "read": true, 00:18:42.989 "write": true, 00:18:42.989 "unmap": true, 00:18:42.989 "flush": true, 00:18:42.989 "reset": true, 00:18:42.989 "nvme_admin": false, 00:18:42.989 "nvme_io": false, 00:18:42.989 "nvme_io_md": false, 00:18:42.989 "write_zeroes": true, 00:18:42.989 "zcopy": true, 00:18:42.989 "get_zone_info": false, 00:18:42.989 "zone_management": false, 00:18:42.989 "zone_append": false, 00:18:42.989 "compare": false, 00:18:42.989 "compare_and_write": false, 00:18:42.989 "abort": true, 00:18:42.989 "seek_hole": false, 00:18:42.989 "seek_data": false, 00:18:42.989 "copy": true, 00:18:42.989 "nvme_iov_md": false 00:18:42.989 }, 00:18:42.989 "memory_domains": [ 00:18:42.990 { 00:18:42.990 "dma_device_id": "system", 00:18:42.990 "dma_device_type": 1 00:18:42.990 }, 00:18:42.990 { 00:18:42.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.990 "dma_device_type": 2 00:18:42.990 } 
00:18:42.990 ], 00:18:42.990 "driver_specific": {} 00:18:42.990 } 00:18:42.990 ] 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.990 "name": "Existed_Raid", 00:18:42.990 "uuid": "56f75754-be87-4622-aaa9-be6981d9a0f5", 00:18:42.990 "strip_size_kb": 64, 00:18:42.990 "state": "online", 00:18:42.990 "raid_level": "raid5f", 00:18:42.990 "superblock": false, 00:18:42.990 "num_base_bdevs": 4, 00:18:42.990 "num_base_bdevs_discovered": 4, 00:18:42.990 "num_base_bdevs_operational": 4, 00:18:42.990 "base_bdevs_list": [ 00:18:42.990 { 00:18:42.990 "name": "NewBaseBdev", 00:18:42.990 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:42.990 "is_configured": true, 00:18:42.990 "data_offset": 0, 00:18:42.990 "data_size": 65536 00:18:42.990 }, 00:18:42.990 { 00:18:42.990 "name": "BaseBdev2", 00:18:42.990 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:42.990 "is_configured": true, 00:18:42.990 "data_offset": 0, 00:18:42.990 "data_size": 65536 00:18:42.990 }, 00:18:42.990 { 00:18:42.990 "name": "BaseBdev3", 00:18:42.990 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:42.990 "is_configured": true, 00:18:42.990 "data_offset": 0, 00:18:42.990 "data_size": 65536 00:18:42.990 }, 00:18:42.990 { 00:18:42.990 "name": "BaseBdev4", 00:18:42.990 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:42.990 "is_configured": true, 00:18:42.990 "data_offset": 0, 00:18:42.990 "data_size": 65536 00:18:42.990 } 00:18:42.990 ] 00:18:42.990 }' 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.990 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.249 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 [2024-10-01 13:52:53.448353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.508 "name": "Existed_Raid", 00:18:43.508 "aliases": [ 00:18:43.508 "56f75754-be87-4622-aaa9-be6981d9a0f5" 00:18:43.508 ], 00:18:43.508 "product_name": "Raid Volume", 00:18:43.508 "block_size": 512, 00:18:43.508 "num_blocks": 196608, 00:18:43.508 "uuid": "56f75754-be87-4622-aaa9-be6981d9a0f5", 00:18:43.508 "assigned_rate_limits": { 00:18:43.508 "rw_ios_per_sec": 0, 00:18:43.508 "rw_mbytes_per_sec": 0, 00:18:43.508 "r_mbytes_per_sec": 0, 00:18:43.508 "w_mbytes_per_sec": 0 00:18:43.508 }, 00:18:43.508 "claimed": false, 00:18:43.508 "zoned": false, 00:18:43.508 "supported_io_types": { 00:18:43.508 "read": true, 00:18:43.508 "write": true, 00:18:43.508 "unmap": false, 00:18:43.508 "flush": false, 00:18:43.508 "reset": true, 00:18:43.508 "nvme_admin": false, 00:18:43.508 "nvme_io": false, 00:18:43.508 "nvme_io_md": 
false, 00:18:43.508 "write_zeroes": true, 00:18:43.508 "zcopy": false, 00:18:43.508 "get_zone_info": false, 00:18:43.508 "zone_management": false, 00:18:43.508 "zone_append": false, 00:18:43.508 "compare": false, 00:18:43.508 "compare_and_write": false, 00:18:43.508 "abort": false, 00:18:43.508 "seek_hole": false, 00:18:43.508 "seek_data": false, 00:18:43.508 "copy": false, 00:18:43.508 "nvme_iov_md": false 00:18:43.508 }, 00:18:43.508 "driver_specific": { 00:18:43.508 "raid": { 00:18:43.508 "uuid": "56f75754-be87-4622-aaa9-be6981d9a0f5", 00:18:43.508 "strip_size_kb": 64, 00:18:43.508 "state": "online", 00:18:43.508 "raid_level": "raid5f", 00:18:43.508 "superblock": false, 00:18:43.508 "num_base_bdevs": 4, 00:18:43.508 "num_base_bdevs_discovered": 4, 00:18:43.508 "num_base_bdevs_operational": 4, 00:18:43.508 "base_bdevs_list": [ 00:18:43.508 { 00:18:43.508 "name": "NewBaseBdev", 00:18:43.508 "uuid": "0c34ac3c-4d04-487c-998b-fa590240ed27", 00:18:43.508 "is_configured": true, 00:18:43.508 "data_offset": 0, 00:18:43.508 "data_size": 65536 00:18:43.508 }, 00:18:43.508 { 00:18:43.508 "name": "BaseBdev2", 00:18:43.508 "uuid": "81681a3c-c48d-458a-ae44-f49ca1f18755", 00:18:43.508 "is_configured": true, 00:18:43.508 "data_offset": 0, 00:18:43.508 "data_size": 65536 00:18:43.508 }, 00:18:43.508 { 00:18:43.508 "name": "BaseBdev3", 00:18:43.508 "uuid": "27d2a084-2566-4362-b2ae-ca0835080e9c", 00:18:43.508 "is_configured": true, 00:18:43.508 "data_offset": 0, 00:18:43.508 "data_size": 65536 00:18:43.508 }, 00:18:43.508 { 00:18:43.508 "name": "BaseBdev4", 00:18:43.508 "uuid": "143cb2e3-053d-410e-aac4-0ee58cf9bcc3", 00:18:43.508 "is_configured": true, 00:18:43.508 "data_offset": 0, 00:18:43.508 "data_size": 65536 00:18:43.508 } 00:18:43.508 ] 00:18:43.508 } 00:18:43.508 } 00:18:43.508 }' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.508 13:52:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:43.508 BaseBdev2 00:18:43.508 BaseBdev3 00:18:43.508 BaseBdev4' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 13:52:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 [2024-10-01 13:52:53.795645] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.768 [2024-10-01 13:52:53.796656] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.768 [2024-10-01 13:52:53.796774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.768 [2024-10-01 13:52:53.797101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.768 [2024-10-01 13:52:53.797117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82813 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82813 ']' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82813 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82813 00:18:43.768 killing process with pid 82813 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82813' 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82813 00:18:43.768 [2024-10-01 13:52:53.853565] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.768 13:52:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82813 00:18:44.336 [2024-10-01 13:52:54.268529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.714 13:52:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:45.714 00:18:45.714 real 0m12.078s 00:18:45.714 user 0m19.079s 00:18:45.714 sys 0m2.486s 00:18:45.714 13:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:45.714 ************************************ 00:18:45.715 END TEST raid5f_state_function_test 00:18:45.715 ************************************ 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.715 13:52:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:45.715 13:52:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:45.715 13:52:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.715 13:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.715 ************************************ 00:18:45.715 START TEST 
raid5f_state_function_test_sb 00:18:45.715 ************************************ 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:45.715 
13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83485 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83485' 00:18:45.715 Process raid pid: 83485 00:18:45.715 13:52:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83485 00:18:45.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83485 ']' 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.715 13:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.715 [2024-10-01 13:52:55.792641] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:18:45.715 [2024-10-01 13:52:55.792779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.974 [2024-10-01 13:52:55.970015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.232 [2024-10-01 13:52:56.199991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.232 [2024-10-01 13:52:56.420629] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.232 [2024-10-01 13:52:56.420706] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.490 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.490 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:46.490 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:46.490 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.490 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.490 [2024-10-01 13:52:56.652780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.490 [2024-10-01 13:52:56.653018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.490 [2024-10-01 13:52:56.653135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.490 [2024-10-01 13:52:56.653185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.490 [2024-10-01 13:52:56.653312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:46.490 [2024-10-01 13:52:56.653346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.491 [2024-10-01 13:52:56.653355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:46.491 [2024-10-01 13:52:56.653368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.491 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.748 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.748 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.748 "name": "Existed_Raid", 00:18:46.748 "uuid": "07b0ef36-f735-4091-addc-97571f30f851", 00:18:46.748 "strip_size_kb": 64, 00:18:46.748 "state": "configuring", 00:18:46.748 "raid_level": "raid5f", 00:18:46.748 "superblock": true, 00:18:46.748 "num_base_bdevs": 4, 00:18:46.748 "num_base_bdevs_discovered": 0, 00:18:46.748 "num_base_bdevs_operational": 4, 00:18:46.748 "base_bdevs_list": [ 00:18:46.748 { 00:18:46.748 "name": "BaseBdev1", 00:18:46.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.748 "is_configured": false, 00:18:46.748 "data_offset": 0, 00:18:46.748 "data_size": 0 00:18:46.748 }, 00:18:46.748 { 00:18:46.748 "name": "BaseBdev2", 00:18:46.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.748 "is_configured": false, 00:18:46.748 "data_offset": 0, 00:18:46.748 "data_size": 0 00:18:46.748 }, 00:18:46.748 { 00:18:46.748 "name": "BaseBdev3", 00:18:46.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.748 "is_configured": false, 00:18:46.748 "data_offset": 0, 00:18:46.748 "data_size": 0 00:18:46.748 }, 00:18:46.748 { 00:18:46.748 "name": "BaseBdev4", 00:18:46.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.748 "is_configured": false, 00:18:46.748 "data_offset": 0, 00:18:46.748 "data_size": 0 00:18:46.748 } 00:18:46.748 ] 00:18:46.748 }' 00:18:46.748 13:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.748 13:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.006 [2024-10-01 13:52:57.092070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.006 [2024-10-01 13:52:57.092118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.006 [2024-10-01 13:52:57.104093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.006 [2024-10-01 13:52:57.104145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.006 [2024-10-01 13:52:57.104156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.006 [2024-10-01 13:52:57.104170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.006 [2024-10-01 13:52:57.104178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.006 [2024-10-01 13:52:57.104192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.006 [2024-10-01 13:52:57.104200] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.006 [2024-10-01 13:52:57.104213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.006 [2024-10-01 13:52:57.167661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.006 BaseBdev1 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.006 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 [ 00:18:47.264 { 00:18:47.264 "name": "BaseBdev1", 00:18:47.264 "aliases": [ 00:18:47.264 "4b5fd126-1fa5-4714-941e-53ef35fadbb6" 00:18:47.264 ], 00:18:47.264 "product_name": "Malloc disk", 00:18:47.264 "block_size": 512, 00:18:47.264 "num_blocks": 65536, 00:18:47.264 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:47.264 "assigned_rate_limits": { 00:18:47.264 "rw_ios_per_sec": 0, 00:18:47.264 "rw_mbytes_per_sec": 0, 00:18:47.264 "r_mbytes_per_sec": 0, 00:18:47.264 "w_mbytes_per_sec": 0 00:18:47.264 }, 00:18:47.264 "claimed": true, 00:18:47.264 "claim_type": "exclusive_write", 00:18:47.264 "zoned": false, 00:18:47.264 "supported_io_types": { 00:18:47.264 "read": true, 00:18:47.264 "write": true, 00:18:47.264 "unmap": true, 00:18:47.264 "flush": true, 00:18:47.264 "reset": true, 00:18:47.264 "nvme_admin": false, 00:18:47.264 "nvme_io": false, 00:18:47.264 "nvme_io_md": false, 00:18:47.264 "write_zeroes": true, 00:18:47.264 "zcopy": true, 00:18:47.264 "get_zone_info": false, 00:18:47.264 "zone_management": false, 00:18:47.264 "zone_append": false, 00:18:47.264 "compare": false, 00:18:47.264 "compare_and_write": false, 00:18:47.264 "abort": true, 00:18:47.264 "seek_hole": false, 00:18:47.264 "seek_data": false, 00:18:47.264 "copy": true, 00:18:47.264 "nvme_iov_md": false 00:18:47.264 }, 00:18:47.264 "memory_domains": [ 00:18:47.264 { 00:18:47.264 "dma_device_id": "system", 00:18:47.264 "dma_device_type": 1 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:47.264 "dma_device_type": 2 00:18:47.264 } 00:18:47.264 ], 00:18:47.264 "driver_specific": {} 00:18:47.264 } 00:18:47.264 ] 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.264 13:52:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.264 "name": "Existed_Raid", 00:18:47.264 "uuid": "b155a92e-1b89-470e-b4c2-261846e3d3d5", 00:18:47.264 "strip_size_kb": 64, 00:18:47.264 "state": "configuring", 00:18:47.264 "raid_level": "raid5f", 00:18:47.264 "superblock": true, 00:18:47.264 "num_base_bdevs": 4, 00:18:47.264 "num_base_bdevs_discovered": 1, 00:18:47.264 "num_base_bdevs_operational": 4, 00:18:47.264 "base_bdevs_list": [ 00:18:47.264 { 00:18:47.264 "name": "BaseBdev1", 00:18:47.264 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:47.264 "is_configured": true, 00:18:47.264 "data_offset": 2048, 00:18:47.264 "data_size": 63488 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "name": "BaseBdev2", 00:18:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.264 "is_configured": false, 00:18:47.264 "data_offset": 0, 00:18:47.264 "data_size": 0 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "name": "BaseBdev3", 00:18:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.264 "is_configured": false, 00:18:47.264 "data_offset": 0, 00:18:47.264 "data_size": 0 00:18:47.264 }, 00:18:47.264 { 00:18:47.264 "name": "BaseBdev4", 00:18:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.264 "is_configured": false, 00:18:47.264 "data_offset": 0, 00:18:47.264 "data_size": 0 00:18:47.264 } 00:18:47.264 ] 00:18:47.264 }' 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.264 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.523 13:52:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 [2024-10-01 13:52:57.679629] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.523 [2024-10-01 13:52:57.679830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 [2024-10-01 13:52:57.691679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.523 [2024-10-01 13:52:57.693946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.523 [2024-10-01 13:52:57.694108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.523 [2024-10-01 13:52:57.694200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.523 [2024-10-01 13:52:57.694249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.523 [2024-10-01 13:52:57.694279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.523 [2024-10-01 13:52:57.694365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.781 13:52:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.781 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.781 "name": "Existed_Raid", 00:18:47.781 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:47.781 "strip_size_kb": 64, 00:18:47.781 "state": "configuring", 00:18:47.781 "raid_level": "raid5f", 00:18:47.781 "superblock": true, 00:18:47.781 "num_base_bdevs": 4, 00:18:47.781 "num_base_bdevs_discovered": 1, 00:18:47.781 "num_base_bdevs_operational": 4, 00:18:47.781 "base_bdevs_list": [ 00:18:47.781 { 00:18:47.781 "name": "BaseBdev1", 00:18:47.781 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:47.781 "is_configured": true, 00:18:47.781 "data_offset": 2048, 00:18:47.781 "data_size": 63488 00:18:47.781 }, 00:18:47.781 { 00:18:47.781 "name": "BaseBdev2", 00:18:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.781 "is_configured": false, 00:18:47.781 "data_offset": 0, 00:18:47.781 "data_size": 0 00:18:47.781 }, 00:18:47.781 { 00:18:47.781 "name": "BaseBdev3", 00:18:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.781 "is_configured": false, 00:18:47.781 "data_offset": 0, 00:18:47.781 "data_size": 0 00:18:47.781 }, 00:18:47.781 { 00:18:47.781 "name": "BaseBdev4", 00:18:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.781 "is_configured": false, 00:18:47.781 "data_offset": 0, 00:18:47.781 "data_size": 0 00:18:47.781 } 00:18:47.781 ] 00:18:47.781 }' 00:18:47.781 13:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.781 13:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.039 [2024-10-01 13:52:58.176781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.039 BaseBdev2 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:48.039 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.040 [ 00:18:48.040 { 00:18:48.040 "name": "BaseBdev2", 00:18:48.040 "aliases": [ 00:18:48.040 
"4869394d-77e8-4553-a006-89cf8fde24fd" 00:18:48.040 ], 00:18:48.040 "product_name": "Malloc disk", 00:18:48.040 "block_size": 512, 00:18:48.040 "num_blocks": 65536, 00:18:48.040 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:48.040 "assigned_rate_limits": { 00:18:48.040 "rw_ios_per_sec": 0, 00:18:48.040 "rw_mbytes_per_sec": 0, 00:18:48.040 "r_mbytes_per_sec": 0, 00:18:48.040 "w_mbytes_per_sec": 0 00:18:48.040 }, 00:18:48.040 "claimed": true, 00:18:48.040 "claim_type": "exclusive_write", 00:18:48.040 "zoned": false, 00:18:48.040 "supported_io_types": { 00:18:48.040 "read": true, 00:18:48.040 "write": true, 00:18:48.040 "unmap": true, 00:18:48.040 "flush": true, 00:18:48.040 "reset": true, 00:18:48.040 "nvme_admin": false, 00:18:48.040 "nvme_io": false, 00:18:48.040 "nvme_io_md": false, 00:18:48.040 "write_zeroes": true, 00:18:48.040 "zcopy": true, 00:18:48.040 "get_zone_info": false, 00:18:48.040 "zone_management": false, 00:18:48.040 "zone_append": false, 00:18:48.040 "compare": false, 00:18:48.040 "compare_and_write": false, 00:18:48.040 "abort": true, 00:18:48.040 "seek_hole": false, 00:18:48.040 "seek_data": false, 00:18:48.040 "copy": true, 00:18:48.040 "nvme_iov_md": false 00:18:48.040 }, 00:18:48.040 "memory_domains": [ 00:18:48.040 { 00:18:48.040 "dma_device_id": "system", 00:18:48.040 "dma_device_type": 1 00:18:48.040 }, 00:18:48.040 { 00:18:48.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.040 "dma_device_type": 2 00:18:48.040 } 00:18:48.040 ], 00:18:48.040 "driver_specific": {} 00:18:48.040 } 00:18:48.040 ] 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.040 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.300 "name": "Existed_Raid", 00:18:48.300 "uuid": 
"00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:48.300 "strip_size_kb": 64, 00:18:48.300 "state": "configuring", 00:18:48.300 "raid_level": "raid5f", 00:18:48.300 "superblock": true, 00:18:48.300 "num_base_bdevs": 4, 00:18:48.300 "num_base_bdevs_discovered": 2, 00:18:48.300 "num_base_bdevs_operational": 4, 00:18:48.300 "base_bdevs_list": [ 00:18:48.300 { 00:18:48.300 "name": "BaseBdev1", 00:18:48.300 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:48.300 "is_configured": true, 00:18:48.300 "data_offset": 2048, 00:18:48.300 "data_size": 63488 00:18:48.300 }, 00:18:48.300 { 00:18:48.300 "name": "BaseBdev2", 00:18:48.300 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:48.300 "is_configured": true, 00:18:48.300 "data_offset": 2048, 00:18:48.300 "data_size": 63488 00:18:48.300 }, 00:18:48.300 { 00:18:48.300 "name": "BaseBdev3", 00:18:48.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.300 "is_configured": false, 00:18:48.300 "data_offset": 0, 00:18:48.300 "data_size": 0 00:18:48.300 }, 00:18:48.300 { 00:18:48.300 "name": "BaseBdev4", 00:18:48.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.300 "is_configured": false, 00:18:48.300 "data_offset": 0, 00:18:48.300 "data_size": 0 00:18:48.300 } 00:18:48.300 ] 00:18:48.300 }' 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.300 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.559 BaseBdev3 00:18:48.559 [2024-10-01 13:52:58.745603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.559 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.818 [ 00:18:48.818 { 00:18:48.818 "name": "BaseBdev3", 00:18:48.818 "aliases": [ 00:18:48.818 "3afafe75-cb64-4ddf-a69f-81867a90ff8c" 00:18:48.818 ], 00:18:48.818 "product_name": "Malloc disk", 00:18:48.818 "block_size": 512, 00:18:48.818 "num_blocks": 65536, 00:18:48.818 "uuid": "3afafe75-cb64-4ddf-a69f-81867a90ff8c", 00:18:48.818 
"assigned_rate_limits": { 00:18:48.818 "rw_ios_per_sec": 0, 00:18:48.818 "rw_mbytes_per_sec": 0, 00:18:48.818 "r_mbytes_per_sec": 0, 00:18:48.818 "w_mbytes_per_sec": 0 00:18:48.818 }, 00:18:48.818 "claimed": true, 00:18:48.818 "claim_type": "exclusive_write", 00:18:48.818 "zoned": false, 00:18:48.818 "supported_io_types": { 00:18:48.818 "read": true, 00:18:48.818 "write": true, 00:18:48.818 "unmap": true, 00:18:48.818 "flush": true, 00:18:48.818 "reset": true, 00:18:48.818 "nvme_admin": false, 00:18:48.818 "nvme_io": false, 00:18:48.818 "nvme_io_md": false, 00:18:48.818 "write_zeroes": true, 00:18:48.818 "zcopy": true, 00:18:48.818 "get_zone_info": false, 00:18:48.818 "zone_management": false, 00:18:48.818 "zone_append": false, 00:18:48.818 "compare": false, 00:18:48.818 "compare_and_write": false, 00:18:48.818 "abort": true, 00:18:48.818 "seek_hole": false, 00:18:48.818 "seek_data": false, 00:18:48.818 "copy": true, 00:18:48.818 "nvme_iov_md": false 00:18:48.818 }, 00:18:48.818 "memory_domains": [ 00:18:48.818 { 00:18:48.818 "dma_device_id": "system", 00:18:48.818 "dma_device_type": 1 00:18:48.818 }, 00:18:48.818 { 00:18:48.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.818 "dma_device_type": 2 00:18:48.818 } 00:18:48.818 ], 00:18:48.818 "driver_specific": {} 00:18:48.818 } 00:18:48.818 ] 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.818 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.819 "name": "Existed_Raid", 00:18:48.819 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:48.819 "strip_size_kb": 64, 00:18:48.819 "state": "configuring", 00:18:48.819 "raid_level": "raid5f", 00:18:48.819 "superblock": true, 00:18:48.819 "num_base_bdevs": 4, 00:18:48.819 "num_base_bdevs_discovered": 3, 
00:18:48.819 "num_base_bdevs_operational": 4, 00:18:48.819 "base_bdevs_list": [ 00:18:48.819 { 00:18:48.819 "name": "BaseBdev1", 00:18:48.819 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:48.819 "is_configured": true, 00:18:48.819 "data_offset": 2048, 00:18:48.819 "data_size": 63488 00:18:48.819 }, 00:18:48.819 { 00:18:48.819 "name": "BaseBdev2", 00:18:48.819 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:48.819 "is_configured": true, 00:18:48.819 "data_offset": 2048, 00:18:48.819 "data_size": 63488 00:18:48.819 }, 00:18:48.819 { 00:18:48.819 "name": "BaseBdev3", 00:18:48.819 "uuid": "3afafe75-cb64-4ddf-a69f-81867a90ff8c", 00:18:48.819 "is_configured": true, 00:18:48.819 "data_offset": 2048, 00:18:48.819 "data_size": 63488 00:18:48.819 }, 00:18:48.819 { 00:18:48.819 "name": "BaseBdev4", 00:18:48.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.819 "is_configured": false, 00:18:48.819 "data_offset": 0, 00:18:48.819 "data_size": 0 00:18:48.819 } 00:18:48.819 ] 00:18:48.819 }' 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.819 13:52:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.077 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:49.077 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.077 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.336 BaseBdev4 00:18:49.336 [2024-10-01 13:52:59.285680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.336 [2024-10-01 13:52:59.285981] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:49.336 [2024-10-01 13:52:59.285998] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:18:49.336 [2024-10-01 13:52:59.286282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.336 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.337 [2024-10-01 13:52:59.294384] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:49.337 [2024-10-01 13:52:59.295617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:49.337 [2024-10-01 13:52:59.296050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:49.337 13:52:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.337 [ 00:18:49.337 { 00:18:49.337 "name": "BaseBdev4", 00:18:49.337 "aliases": [ 00:18:49.337 "c238d774-6835-4013-890c-c4fef665511f" 00:18:49.337 ], 00:18:49.337 "product_name": "Malloc disk", 00:18:49.337 "block_size": 512, 00:18:49.337 "num_blocks": 65536, 00:18:49.337 "uuid": "c238d774-6835-4013-890c-c4fef665511f", 00:18:49.337 "assigned_rate_limits": { 00:18:49.337 "rw_ios_per_sec": 0, 00:18:49.337 "rw_mbytes_per_sec": 0, 00:18:49.337 "r_mbytes_per_sec": 0, 00:18:49.337 "w_mbytes_per_sec": 0 00:18:49.337 }, 00:18:49.337 "claimed": true, 00:18:49.337 "claim_type": "exclusive_write", 00:18:49.337 "zoned": false, 00:18:49.337 "supported_io_types": { 00:18:49.337 "read": true, 00:18:49.337 "write": true, 00:18:49.337 "unmap": true, 00:18:49.337 "flush": true, 00:18:49.337 "reset": true, 00:18:49.337 "nvme_admin": false, 00:18:49.337 "nvme_io": false, 00:18:49.337 "nvme_io_md": false, 00:18:49.337 "write_zeroes": true, 00:18:49.337 "zcopy": true, 00:18:49.337 "get_zone_info": false, 00:18:49.337 "zone_management": false, 00:18:49.337 "zone_append": false, 00:18:49.337 "compare": false, 00:18:49.337 "compare_and_write": false, 00:18:49.337 "abort": true, 00:18:49.337 "seek_hole": false, 00:18:49.337 "seek_data": false, 00:18:49.337 "copy": true, 00:18:49.337 "nvme_iov_md": false 00:18:49.337 }, 00:18:49.337 "memory_domains": [ 00:18:49.337 { 00:18:49.337 "dma_device_id": "system", 00:18:49.337 "dma_device_type": 1 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.337 "dma_device_type": 2 00:18:49.337 } 00:18:49.337 ], 00:18:49.337 "driver_specific": {} 00:18:49.337 } 00:18:49.337 ] 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.337 13:52:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.337 "name": "Existed_Raid", 00:18:49.337 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:49.337 "strip_size_kb": 64, 00:18:49.337 "state": "online", 00:18:49.337 "raid_level": "raid5f", 00:18:49.337 "superblock": true, 00:18:49.337 "num_base_bdevs": 4, 00:18:49.337 "num_base_bdevs_discovered": 4, 00:18:49.337 "num_base_bdevs_operational": 4, 00:18:49.337 "base_bdevs_list": [ 00:18:49.337 { 00:18:49.337 "name": "BaseBdev1", 00:18:49.337 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": "BaseBdev2", 00:18:49.337 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": "BaseBdev3", 00:18:49.337 "uuid": "3afafe75-cb64-4ddf-a69f-81867a90ff8c", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": "BaseBdev4", 00:18:49.337 "uuid": "c238d774-6835-4013-890c-c4fef665511f", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 } 00:18:49.337 ] 00:18:49.337 }' 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.337 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.906 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:49.906 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:49.906 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:49.906 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:49.906 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.907 [2024-10-01 13:52:59.808094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.907 "name": "Existed_Raid", 00:18:49.907 "aliases": [ 00:18:49.907 "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c" 00:18:49.907 ], 00:18:49.907 "product_name": "Raid Volume", 00:18:49.907 "block_size": 512, 00:18:49.907 "num_blocks": 190464, 00:18:49.907 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:49.907 "assigned_rate_limits": { 00:18:49.907 "rw_ios_per_sec": 0, 00:18:49.907 "rw_mbytes_per_sec": 0, 00:18:49.907 "r_mbytes_per_sec": 0, 00:18:49.907 "w_mbytes_per_sec": 0 00:18:49.907 }, 00:18:49.907 "claimed": false, 00:18:49.907 "zoned": false, 00:18:49.907 "supported_io_types": { 00:18:49.907 "read": true, 00:18:49.907 "write": true, 00:18:49.907 "unmap": false, 00:18:49.907 "flush": false, 
00:18:49.907 "reset": true, 00:18:49.907 "nvme_admin": false, 00:18:49.907 "nvme_io": false, 00:18:49.907 "nvme_io_md": false, 00:18:49.907 "write_zeroes": true, 00:18:49.907 "zcopy": false, 00:18:49.907 "get_zone_info": false, 00:18:49.907 "zone_management": false, 00:18:49.907 "zone_append": false, 00:18:49.907 "compare": false, 00:18:49.907 "compare_and_write": false, 00:18:49.907 "abort": false, 00:18:49.907 "seek_hole": false, 00:18:49.907 "seek_data": false, 00:18:49.907 "copy": false, 00:18:49.907 "nvme_iov_md": false 00:18:49.907 }, 00:18:49.907 "driver_specific": { 00:18:49.907 "raid": { 00:18:49.907 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:49.907 "strip_size_kb": 64, 00:18:49.907 "state": "online", 00:18:49.907 "raid_level": "raid5f", 00:18:49.907 "superblock": true, 00:18:49.907 "num_base_bdevs": 4, 00:18:49.907 "num_base_bdevs_discovered": 4, 00:18:49.907 "num_base_bdevs_operational": 4, 00:18:49.907 "base_bdevs_list": [ 00:18:49.907 { 00:18:49.907 "name": "BaseBdev1", 00:18:49.907 "uuid": "4b5fd126-1fa5-4714-941e-53ef35fadbb6", 00:18:49.907 "is_configured": true, 00:18:49.907 "data_offset": 2048, 00:18:49.907 "data_size": 63488 00:18:49.907 }, 00:18:49.907 { 00:18:49.907 "name": "BaseBdev2", 00:18:49.907 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:49.907 "is_configured": true, 00:18:49.907 "data_offset": 2048, 00:18:49.907 "data_size": 63488 00:18:49.907 }, 00:18:49.907 { 00:18:49.907 "name": "BaseBdev3", 00:18:49.907 "uuid": "3afafe75-cb64-4ddf-a69f-81867a90ff8c", 00:18:49.907 "is_configured": true, 00:18:49.907 "data_offset": 2048, 00:18:49.907 "data_size": 63488 00:18:49.907 }, 00:18:49.907 { 00:18:49.907 "name": "BaseBdev4", 00:18:49.907 "uuid": "c238d774-6835-4013-890c-c4fef665511f", 00:18:49.907 "is_configured": true, 00:18:49.907 "data_offset": 2048, 00:18:49.907 "data_size": 63488 00:18:49.907 } 00:18:49.907 ] 00:18:49.907 } 00:18:49.907 } 00:18:49.907 }' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:49.907 BaseBdev2 00:18:49.907 BaseBdev3 00:18:49.907 BaseBdev4' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:49.907 13:52:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.907 13:52:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.907 13:53:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.907 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 [2024-10-01 13:53:00.147741] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.166 "name": "Existed_Raid", 00:18:50.166 "uuid": "00c0ecf5-d2b9-4eb7-aa27-fd1152ec623c", 00:18:50.166 "strip_size_kb": 64, 00:18:50.166 "state": "online", 00:18:50.166 "raid_level": "raid5f", 00:18:50.166 "superblock": true, 00:18:50.166 "num_base_bdevs": 4, 00:18:50.166 "num_base_bdevs_discovered": 3, 00:18:50.166 "num_base_bdevs_operational": 3, 00:18:50.166 "base_bdevs_list": [ 00:18:50.166 { 00:18:50.166 "name": 
null, 00:18:50.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.166 "is_configured": false, 00:18:50.166 "data_offset": 0, 00:18:50.166 "data_size": 63488 00:18:50.166 }, 00:18:50.166 { 00:18:50.166 "name": "BaseBdev2", 00:18:50.166 "uuid": "4869394d-77e8-4553-a006-89cf8fde24fd", 00:18:50.166 "is_configured": true, 00:18:50.166 "data_offset": 2048, 00:18:50.166 "data_size": 63488 00:18:50.166 }, 00:18:50.166 { 00:18:50.166 "name": "BaseBdev3", 00:18:50.166 "uuid": "3afafe75-cb64-4ddf-a69f-81867a90ff8c", 00:18:50.166 "is_configured": true, 00:18:50.166 "data_offset": 2048, 00:18:50.166 "data_size": 63488 00:18:50.166 }, 00:18:50.166 { 00:18:50.166 "name": "BaseBdev4", 00:18:50.166 "uuid": "c238d774-6835-4013-890c-c4fef665511f", 00:18:50.166 "is_configured": true, 00:18:50.166 "data_offset": 2048, 00:18:50.166 "data_size": 63488 00:18:50.166 } 00:18:50.166 ] 00:18:50.166 }' 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.166 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.731 [2024-10-01 13:53:00.800777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:50.731 [2024-10-01 13:53:00.801116] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.731 [2024-10-01 13:53:00.898844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:50.731 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.990 13:53:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.990 [2024-10-01 13:53:00.954830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.990 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.990 [2024-10-01 
13:53:01.132343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:50.990 [2024-10-01 13:53:01.132607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.250 13:53:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.250 BaseBdev2 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.250 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.250 [ 00:18:51.250 { 00:18:51.250 "name": "BaseBdev2", 00:18:51.250 "aliases": [ 00:18:51.250 "e96f4701-4c1b-4abf-a9af-d0b713ec8a73" 00:18:51.250 ], 00:18:51.250 "product_name": "Malloc disk", 00:18:51.250 "block_size": 512, 00:18:51.250 
"num_blocks": 65536, 00:18:51.250 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:51.250 "assigned_rate_limits": { 00:18:51.250 "rw_ios_per_sec": 0, 00:18:51.250 "rw_mbytes_per_sec": 0, 00:18:51.250 "r_mbytes_per_sec": 0, 00:18:51.250 "w_mbytes_per_sec": 0 00:18:51.250 }, 00:18:51.250 "claimed": false, 00:18:51.250 "zoned": false, 00:18:51.250 "supported_io_types": { 00:18:51.250 "read": true, 00:18:51.250 "write": true, 00:18:51.250 "unmap": true, 00:18:51.250 "flush": true, 00:18:51.250 "reset": true, 00:18:51.250 "nvme_admin": false, 00:18:51.250 "nvme_io": false, 00:18:51.250 "nvme_io_md": false, 00:18:51.250 "write_zeroes": true, 00:18:51.250 "zcopy": true, 00:18:51.250 "get_zone_info": false, 00:18:51.250 "zone_management": false, 00:18:51.250 "zone_append": false, 00:18:51.250 "compare": false, 00:18:51.250 "compare_and_write": false, 00:18:51.250 "abort": true, 00:18:51.250 "seek_hole": false, 00:18:51.250 "seek_data": false, 00:18:51.250 "copy": true, 00:18:51.250 "nvme_iov_md": false 00:18:51.250 }, 00:18:51.250 "memory_domains": [ 00:18:51.250 { 00:18:51.250 "dma_device_id": "system", 00:18:51.250 "dma_device_type": 1 00:18:51.250 }, 00:18:51.250 { 00:18:51.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.251 "dma_device_type": 2 00:18:51.251 } 00:18:51.251 ], 00:18:51.251 "driver_specific": {} 00:18:51.251 } 00:18:51.251 ] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:51.251 13:53:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 BaseBdev3 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 [ 00:18:51.251 { 00:18:51.251 "name": "BaseBdev3", 00:18:51.251 "aliases": [ 00:18:51.251 
"b05ef6d6-052d-4d73-9108-a214a45dd29b" 00:18:51.251 ], 00:18:51.251 "product_name": "Malloc disk", 00:18:51.251 "block_size": 512, 00:18:51.251 "num_blocks": 65536, 00:18:51.251 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:51.251 "assigned_rate_limits": { 00:18:51.251 "rw_ios_per_sec": 0, 00:18:51.251 "rw_mbytes_per_sec": 0, 00:18:51.251 "r_mbytes_per_sec": 0, 00:18:51.251 "w_mbytes_per_sec": 0 00:18:51.251 }, 00:18:51.251 "claimed": false, 00:18:51.251 "zoned": false, 00:18:51.251 "supported_io_types": { 00:18:51.251 "read": true, 00:18:51.251 "write": true, 00:18:51.251 "unmap": true, 00:18:51.251 "flush": true, 00:18:51.251 "reset": true, 00:18:51.251 "nvme_admin": false, 00:18:51.251 "nvme_io": false, 00:18:51.251 "nvme_io_md": false, 00:18:51.251 "write_zeroes": true, 00:18:51.251 "zcopy": true, 00:18:51.251 "get_zone_info": false, 00:18:51.251 "zone_management": false, 00:18:51.251 "zone_append": false, 00:18:51.251 "compare": false, 00:18:51.251 "compare_and_write": false, 00:18:51.251 "abort": true, 00:18:51.251 "seek_hole": false, 00:18:51.251 "seek_data": false, 00:18:51.251 "copy": true, 00:18:51.251 "nvme_iov_md": false 00:18:51.251 }, 00:18:51.251 "memory_domains": [ 00:18:51.251 { 00:18:51.251 "dma_device_id": "system", 00:18:51.251 "dma_device_type": 1 00:18:51.251 }, 00:18:51.251 { 00:18:51.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.251 "dma_device_type": 2 00:18:51.251 } 00:18:51.251 ], 00:18:51.251 "driver_specific": {} 00:18:51.251 } 00:18:51.251 ] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:51.251 13:53:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.251 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 BaseBdev4 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:51.511 [ 00:18:51.511 { 00:18:51.511 "name": "BaseBdev4", 00:18:51.511 "aliases": [ 00:18:51.511 "1a2c56de-ffb6-4a08-97b7-e70f735e1490" 00:18:51.511 ], 00:18:51.511 "product_name": "Malloc disk", 00:18:51.511 "block_size": 512, 00:18:51.511 "num_blocks": 65536, 00:18:51.511 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:51.511 "assigned_rate_limits": { 00:18:51.511 "rw_ios_per_sec": 0, 00:18:51.511 "rw_mbytes_per_sec": 0, 00:18:51.511 "r_mbytes_per_sec": 0, 00:18:51.511 "w_mbytes_per_sec": 0 00:18:51.511 }, 00:18:51.511 "claimed": false, 00:18:51.511 "zoned": false, 00:18:51.511 "supported_io_types": { 00:18:51.511 "read": true, 00:18:51.511 "write": true, 00:18:51.511 "unmap": true, 00:18:51.511 "flush": true, 00:18:51.511 "reset": true, 00:18:51.511 "nvme_admin": false, 00:18:51.511 "nvme_io": false, 00:18:51.511 "nvme_io_md": false, 00:18:51.511 "write_zeroes": true, 00:18:51.511 "zcopy": true, 00:18:51.511 "get_zone_info": false, 00:18:51.511 "zone_management": false, 00:18:51.511 "zone_append": false, 00:18:51.511 "compare": false, 00:18:51.511 "compare_and_write": false, 00:18:51.511 "abort": true, 00:18:51.511 "seek_hole": false, 00:18:51.511 "seek_data": false, 00:18:51.511 "copy": true, 00:18:51.511 "nvme_iov_md": false 00:18:51.511 }, 00:18:51.511 "memory_domains": [ 00:18:51.511 { 00:18:51.511 "dma_device_id": "system", 00:18:51.511 "dma_device_type": 1 00:18:51.511 }, 00:18:51.511 { 00:18:51.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.511 "dma_device_type": 2 00:18:51.511 } 00:18:51.511 ], 00:18:51.511 "driver_specific": {} 00:18:51.511 } 00:18:51.511 ] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:51.511 13:53:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 [2024-10-01 13:53:01.511363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.511 [2024-10-01 13:53:01.511563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.511 [2024-10-01 13:53:01.511628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.511 [2024-10-01 13:53:01.513909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.511 [2024-10-01 13:53:01.513967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.511 "name": "Existed_Raid", 00:18:51.511 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:51.511 "strip_size_kb": 64, 00:18:51.511 "state": "configuring", 00:18:51.511 "raid_level": "raid5f", 00:18:51.511 "superblock": true, 00:18:51.511 "num_base_bdevs": 4, 00:18:51.511 "num_base_bdevs_discovered": 3, 00:18:51.511 "num_base_bdevs_operational": 4, 00:18:51.511 "base_bdevs_list": [ 00:18:51.511 { 00:18:51.511 "name": "BaseBdev1", 00:18:51.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.511 "is_configured": false, 00:18:51.511 "data_offset": 0, 00:18:51.511 "data_size": 0 00:18:51.511 }, 00:18:51.511 { 00:18:51.511 "name": "BaseBdev2", 00:18:51.511 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:51.511 "is_configured": true, 00:18:51.511 "data_offset": 2048, 00:18:51.511 
"data_size": 63488 00:18:51.511 }, 00:18:51.511 { 00:18:51.511 "name": "BaseBdev3", 00:18:51.511 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:51.511 "is_configured": true, 00:18:51.511 "data_offset": 2048, 00:18:51.511 "data_size": 63488 00:18:51.511 }, 00:18:51.511 { 00:18:51.511 "name": "BaseBdev4", 00:18:51.512 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:51.512 "is_configured": true, 00:18:51.512 "data_offset": 2048, 00:18:51.512 "data_size": 63488 00:18:51.512 } 00:18:51.512 ] 00:18:51.512 }' 00:18:51.512 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.512 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.799 [2024-10-01 13:53:01.935051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.799 13:53:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.799 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.057 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.057 "name": "Existed_Raid", 00:18:52.057 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:52.057 "strip_size_kb": 64, 00:18:52.057 "state": "configuring", 00:18:52.057 "raid_level": "raid5f", 00:18:52.057 "superblock": true, 00:18:52.057 "num_base_bdevs": 4, 00:18:52.057 "num_base_bdevs_discovered": 2, 00:18:52.057 "num_base_bdevs_operational": 4, 00:18:52.057 "base_bdevs_list": [ 00:18:52.057 { 00:18:52.057 "name": "BaseBdev1", 00:18:52.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.057 "is_configured": false, 00:18:52.057 "data_offset": 0, 00:18:52.057 "data_size": 0 00:18:52.057 }, 00:18:52.057 { 00:18:52.057 "name": null, 00:18:52.057 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:52.057 
"is_configured": false, 00:18:52.057 "data_offset": 0, 00:18:52.057 "data_size": 63488 00:18:52.057 }, 00:18:52.057 { 00:18:52.057 "name": "BaseBdev3", 00:18:52.057 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:52.057 "is_configured": true, 00:18:52.057 "data_offset": 2048, 00:18:52.057 "data_size": 63488 00:18:52.057 }, 00:18:52.057 { 00:18:52.057 "name": "BaseBdev4", 00:18:52.057 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:52.057 "is_configured": true, 00:18:52.057 "data_offset": 2048, 00:18:52.057 "data_size": 63488 00:18:52.057 } 00:18:52.057 ] 00:18:52.057 }' 00:18:52.057 13:53:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.057 13:53:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.315 [2024-10-01 13:53:02.451387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:52.315 BaseBdev1 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.315 [ 00:18:52.315 { 00:18:52.315 "name": "BaseBdev1", 00:18:52.315 "aliases": [ 00:18:52.315 "007aebef-e8c4-4f1c-b770-4d3982125dc1" 00:18:52.315 ], 00:18:52.315 "product_name": "Malloc disk", 00:18:52.315 "block_size": 512, 00:18:52.315 "num_blocks": 65536, 00:18:52.315 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 
00:18:52.315 "assigned_rate_limits": { 00:18:52.315 "rw_ios_per_sec": 0, 00:18:52.315 "rw_mbytes_per_sec": 0, 00:18:52.315 "r_mbytes_per_sec": 0, 00:18:52.315 "w_mbytes_per_sec": 0 00:18:52.315 }, 00:18:52.315 "claimed": true, 00:18:52.315 "claim_type": "exclusive_write", 00:18:52.315 "zoned": false, 00:18:52.315 "supported_io_types": { 00:18:52.315 "read": true, 00:18:52.315 "write": true, 00:18:52.315 "unmap": true, 00:18:52.315 "flush": true, 00:18:52.315 "reset": true, 00:18:52.315 "nvme_admin": false, 00:18:52.315 "nvme_io": false, 00:18:52.315 "nvme_io_md": false, 00:18:52.315 "write_zeroes": true, 00:18:52.315 "zcopy": true, 00:18:52.315 "get_zone_info": false, 00:18:52.315 "zone_management": false, 00:18:52.315 "zone_append": false, 00:18:52.315 "compare": false, 00:18:52.315 "compare_and_write": false, 00:18:52.315 "abort": true, 00:18:52.315 "seek_hole": false, 00:18:52.315 "seek_data": false, 00:18:52.315 "copy": true, 00:18:52.315 "nvme_iov_md": false 00:18:52.315 }, 00:18:52.315 "memory_domains": [ 00:18:52.315 { 00:18:52.315 "dma_device_id": "system", 00:18:52.315 "dma_device_type": 1 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.315 "dma_device_type": 2 00:18:52.315 } 00:18:52.315 ], 00:18:52.315 "driver_specific": {} 00:18:52.315 } 00:18:52.315 ] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.315 13:53:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.315 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.574 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.575 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.575 "name": "Existed_Raid", 00:18:52.575 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:52.575 "strip_size_kb": 64, 00:18:52.575 "state": "configuring", 00:18:52.575 "raid_level": "raid5f", 00:18:52.575 "superblock": true, 00:18:52.575 "num_base_bdevs": 4, 00:18:52.575 "num_base_bdevs_discovered": 3, 00:18:52.575 "num_base_bdevs_operational": 4, 00:18:52.575 "base_bdevs_list": [ 00:18:52.575 { 00:18:52.575 "name": "BaseBdev1", 00:18:52.575 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 
00:18:52.575 "is_configured": true, 00:18:52.575 "data_offset": 2048, 00:18:52.575 "data_size": 63488 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "name": null, 00:18:52.575 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:52.575 "is_configured": false, 00:18:52.575 "data_offset": 0, 00:18:52.575 "data_size": 63488 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "name": "BaseBdev3", 00:18:52.575 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:52.575 "is_configured": true, 00:18:52.575 "data_offset": 2048, 00:18:52.575 "data_size": 63488 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "name": "BaseBdev4", 00:18:52.575 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:52.575 "is_configured": true, 00:18:52.575 "data_offset": 2048, 00:18:52.575 "data_size": 63488 00:18:52.575 } 00:18:52.575 ] 00:18:52.575 }' 00:18:52.575 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.575 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.833 [2024-10-01 13:53:02.974732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.833 13:53:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.833 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.092 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.092 "name": "Existed_Raid", 00:18:53.092 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:53.092 "strip_size_kb": 64, 00:18:53.092 "state": "configuring", 00:18:53.092 "raid_level": "raid5f", 00:18:53.092 "superblock": true, 00:18:53.092 "num_base_bdevs": 4, 00:18:53.092 "num_base_bdevs_discovered": 2, 00:18:53.092 "num_base_bdevs_operational": 4, 00:18:53.092 "base_bdevs_list": [ 00:18:53.092 { 00:18:53.092 "name": "BaseBdev1", 00:18:53.092 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:53.092 "is_configured": true, 00:18:53.092 "data_offset": 2048, 00:18:53.092 "data_size": 63488 00:18:53.092 }, 00:18:53.092 { 00:18:53.092 "name": null, 00:18:53.092 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:53.092 "is_configured": false, 00:18:53.092 "data_offset": 0, 00:18:53.092 "data_size": 63488 00:18:53.092 }, 00:18:53.092 { 00:18:53.092 "name": null, 00:18:53.092 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:53.092 "is_configured": false, 00:18:53.092 "data_offset": 0, 00:18:53.092 "data_size": 63488 00:18:53.092 }, 00:18:53.092 { 00:18:53.092 "name": "BaseBdev4", 00:18:53.092 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:53.092 "is_configured": true, 00:18:53.092 "data_offset": 2048, 00:18:53.092 "data_size": 63488 00:18:53.092 } 00:18:53.092 ] 00:18:53.092 }' 00:18:53.092 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.092 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.351 [2024-10-01 13:53:03.470599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.351 "name": "Existed_Raid", 00:18:53.351 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:53.351 "strip_size_kb": 64, 00:18:53.351 "state": "configuring", 00:18:53.351 "raid_level": "raid5f", 00:18:53.351 "superblock": true, 00:18:53.351 "num_base_bdevs": 4, 00:18:53.351 "num_base_bdevs_discovered": 3, 00:18:53.351 "num_base_bdevs_operational": 4, 00:18:53.351 "base_bdevs_list": [ 00:18:53.351 { 00:18:53.351 "name": "BaseBdev1", 00:18:53.351 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:53.351 "is_configured": true, 00:18:53.351 "data_offset": 2048, 00:18:53.351 "data_size": 63488 00:18:53.351 }, 00:18:53.351 { 00:18:53.351 "name": null, 00:18:53.351 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:53.351 "is_configured": false, 00:18:53.351 "data_offset": 0, 00:18:53.351 "data_size": 63488 00:18:53.351 }, 00:18:53.351 { 00:18:53.351 "name": "BaseBdev3", 00:18:53.351 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 
00:18:53.351 "is_configured": true, 00:18:53.351 "data_offset": 2048, 00:18:53.351 "data_size": 63488 00:18:53.351 }, 00:18:53.351 { 00:18:53.351 "name": "BaseBdev4", 00:18:53.351 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:53.351 "is_configured": true, 00:18:53.351 "data_offset": 2048, 00:18:53.351 "data_size": 63488 00:18:53.351 } 00:18:53.351 ] 00:18:53.351 }' 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.351 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.919 13:53:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.919 [2024-10-01 13:53:03.974021] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.919 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.178 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.178 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.178 "name": "Existed_Raid", 00:18:54.178 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:54.178 "strip_size_kb": 64, 00:18:54.178 "state": "configuring", 00:18:54.178 "raid_level": "raid5f", 
00:18:54.178 "superblock": true, 00:18:54.178 "num_base_bdevs": 4, 00:18:54.178 "num_base_bdevs_discovered": 2, 00:18:54.178 "num_base_bdevs_operational": 4, 00:18:54.178 "base_bdevs_list": [ 00:18:54.178 { 00:18:54.178 "name": null, 00:18:54.178 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:54.178 "is_configured": false, 00:18:54.178 "data_offset": 0, 00:18:54.178 "data_size": 63488 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "name": null, 00:18:54.178 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:54.178 "is_configured": false, 00:18:54.178 "data_offset": 0, 00:18:54.178 "data_size": 63488 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "name": "BaseBdev3", 00:18:54.178 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:54.178 "is_configured": true, 00:18:54.178 "data_offset": 2048, 00:18:54.178 "data_size": 63488 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "name": "BaseBdev4", 00:18:54.178 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:54.178 "is_configured": true, 00:18:54.178 "data_offset": 2048, 00:18:54.178 "data_size": 63488 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }' 00:18:54.178 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.178 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.438 [2024-10-01 13:53:04.574700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.438 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.438 "name": "Existed_Raid", 00:18:54.438 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:54.438 "strip_size_kb": 64, 00:18:54.438 "state": "configuring", 00:18:54.438 "raid_level": "raid5f", 00:18:54.438 "superblock": true, 00:18:54.438 "num_base_bdevs": 4, 00:18:54.438 "num_base_bdevs_discovered": 3, 00:18:54.439 "num_base_bdevs_operational": 4, 00:18:54.439 "base_bdevs_list": [ 00:18:54.439 { 00:18:54.439 "name": null, 00:18:54.439 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:54.439 "is_configured": false, 00:18:54.439 "data_offset": 0, 00:18:54.439 "data_size": 63488 00:18:54.439 }, 00:18:54.439 { 00:18:54.439 "name": "BaseBdev2", 00:18:54.439 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:54.439 "is_configured": true, 00:18:54.439 "data_offset": 2048, 00:18:54.439 "data_size": 63488 00:18:54.439 }, 00:18:54.439 { 00:18:54.439 "name": "BaseBdev3", 00:18:54.439 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:54.439 "is_configured": true, 00:18:54.439 "data_offset": 2048, 00:18:54.439 "data_size": 63488 00:18:54.439 }, 00:18:54.439 { 00:18:54.439 "name": "BaseBdev4", 00:18:54.439 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:54.439 "is_configured": true, 00:18:54.439 "data_offset": 2048, 00:18:54.439 "data_size": 63488 00:18:54.439 } 00:18:54.439 ] 00:18:54.439 }' 00:18:54.697 13:53:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:18:54.697 13:53:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 007aebef-e8c4-4f1c-b770-4d3982125dc1 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.956 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.215 [2024-10-01 13:53:05.150292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:55.215 [2024-10-01 13:53:05.150967] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:55.215 [2024-10-01 13:53:05.150996] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:55.215 [2024-10-01 13:53:05.151322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:55.215 NewBaseBdev 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.215 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.216 [2024-10-01 13:53:05.158984] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:55.216 [2024-10-01 13:53:05.159021] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:55.216 [2024-10-01 13:53:05.159362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.216 [ 00:18:55.216 { 00:18:55.216 "name": "NewBaseBdev", 00:18:55.216 "aliases": [ 00:18:55.216 "007aebef-e8c4-4f1c-b770-4d3982125dc1" 00:18:55.216 ], 00:18:55.216 "product_name": "Malloc disk", 00:18:55.216 "block_size": 512, 00:18:55.216 "num_blocks": 65536, 00:18:55.216 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:55.216 "assigned_rate_limits": { 00:18:55.216 "rw_ios_per_sec": 0, 00:18:55.216 "rw_mbytes_per_sec": 0, 00:18:55.216 "r_mbytes_per_sec": 0, 00:18:55.216 "w_mbytes_per_sec": 0 00:18:55.216 }, 00:18:55.216 "claimed": true, 00:18:55.216 "claim_type": "exclusive_write", 00:18:55.216 "zoned": false, 00:18:55.216 "supported_io_types": { 00:18:55.216 "read": true, 00:18:55.216 "write": true, 00:18:55.216 "unmap": true, 00:18:55.216 "flush": true, 00:18:55.216 "reset": true, 00:18:55.216 "nvme_admin": false, 00:18:55.216 "nvme_io": false, 00:18:55.216 "nvme_io_md": false, 00:18:55.216 "write_zeroes": true, 00:18:55.216 "zcopy": true, 00:18:55.216 "get_zone_info": false, 00:18:55.216 "zone_management": false, 00:18:55.216 "zone_append": false, 00:18:55.216 "compare": false, 00:18:55.216 "compare_and_write": false, 00:18:55.216 "abort": true, 00:18:55.216 "seek_hole": false, 00:18:55.216 "seek_data": false, 00:18:55.216 "copy": true, 00:18:55.216 "nvme_iov_md": false 00:18:55.216 }, 00:18:55.216 "memory_domains": [ 00:18:55.216 { 00:18:55.216 "dma_device_id": "system", 00:18:55.216 "dma_device_type": 1 00:18:55.216 }, 00:18:55.216 { 00:18:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.216 "dma_device_type": 2 00:18:55.216 } 
00:18:55.216 ], 00:18:55.216 "driver_specific": {} 00:18:55.216 } 00:18:55.216 ] 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.216 
13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.216 "name": "Existed_Raid", 00:18:55.216 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:55.216 "strip_size_kb": 64, 00:18:55.216 "state": "online", 00:18:55.216 "raid_level": "raid5f", 00:18:55.216 "superblock": true, 00:18:55.216 "num_base_bdevs": 4, 00:18:55.216 "num_base_bdevs_discovered": 4, 00:18:55.216 "num_base_bdevs_operational": 4, 00:18:55.216 "base_bdevs_list": [ 00:18:55.216 { 00:18:55.216 "name": "NewBaseBdev", 00:18:55.216 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:55.216 "is_configured": true, 00:18:55.216 "data_offset": 2048, 00:18:55.216 "data_size": 63488 00:18:55.216 }, 00:18:55.216 { 00:18:55.216 "name": "BaseBdev2", 00:18:55.216 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:55.216 "is_configured": true, 00:18:55.216 "data_offset": 2048, 00:18:55.216 "data_size": 63488 00:18:55.216 }, 00:18:55.216 { 00:18:55.216 "name": "BaseBdev3", 00:18:55.216 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:55.216 "is_configured": true, 00:18:55.216 "data_offset": 2048, 00:18:55.216 "data_size": 63488 00:18:55.216 }, 00:18:55.216 { 00:18:55.216 "name": "BaseBdev4", 00:18:55.216 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:55.216 "is_configured": true, 00:18:55.216 "data_offset": 2048, 00:18:55.216 "data_size": 63488 00:18:55.216 } 00:18:55.216 ] 00:18:55.216 }' 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.216 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.476 [2024-10-01 13:53:05.620900] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.476 "name": "Existed_Raid", 00:18:55.476 "aliases": [ 00:18:55.476 "5f3c80b3-1f62-4685-80ab-ea798a446fde" 00:18:55.476 ], 00:18:55.476 "product_name": "Raid Volume", 00:18:55.476 "block_size": 512, 00:18:55.476 "num_blocks": 190464, 00:18:55.476 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:55.476 "assigned_rate_limits": { 00:18:55.476 "rw_ios_per_sec": 0, 00:18:55.476 "rw_mbytes_per_sec": 0, 00:18:55.476 "r_mbytes_per_sec": 0, 00:18:55.476 "w_mbytes_per_sec": 0 00:18:55.476 }, 00:18:55.476 "claimed": false, 00:18:55.476 "zoned": false, 00:18:55.476 "supported_io_types": { 00:18:55.476 "read": true, 00:18:55.476 "write": true, 00:18:55.476 "unmap": false, 00:18:55.476 "flush": false, 
00:18:55.476 "reset": true, 00:18:55.476 "nvme_admin": false, 00:18:55.476 "nvme_io": false, 00:18:55.476 "nvme_io_md": false, 00:18:55.476 "write_zeroes": true, 00:18:55.476 "zcopy": false, 00:18:55.476 "get_zone_info": false, 00:18:55.476 "zone_management": false, 00:18:55.476 "zone_append": false, 00:18:55.476 "compare": false, 00:18:55.476 "compare_and_write": false, 00:18:55.476 "abort": false, 00:18:55.476 "seek_hole": false, 00:18:55.476 "seek_data": false, 00:18:55.476 "copy": false, 00:18:55.476 "nvme_iov_md": false 00:18:55.476 }, 00:18:55.476 "driver_specific": { 00:18:55.476 "raid": { 00:18:55.476 "uuid": "5f3c80b3-1f62-4685-80ab-ea798a446fde", 00:18:55.476 "strip_size_kb": 64, 00:18:55.476 "state": "online", 00:18:55.476 "raid_level": "raid5f", 00:18:55.476 "superblock": true, 00:18:55.476 "num_base_bdevs": 4, 00:18:55.476 "num_base_bdevs_discovered": 4, 00:18:55.476 "num_base_bdevs_operational": 4, 00:18:55.476 "base_bdevs_list": [ 00:18:55.476 { 00:18:55.476 "name": "NewBaseBdev", 00:18:55.476 "uuid": "007aebef-e8c4-4f1c-b770-4d3982125dc1", 00:18:55.476 "is_configured": true, 00:18:55.476 "data_offset": 2048, 00:18:55.476 "data_size": 63488 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev2", 00:18:55.476 "uuid": "e96f4701-4c1b-4abf-a9af-d0b713ec8a73", 00:18:55.476 "is_configured": true, 00:18:55.476 "data_offset": 2048, 00:18:55.476 "data_size": 63488 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev3", 00:18:55.476 "uuid": "b05ef6d6-052d-4d73-9108-a214a45dd29b", 00:18:55.476 "is_configured": true, 00:18:55.476 "data_offset": 2048, 00:18:55.476 "data_size": 63488 00:18:55.476 }, 00:18:55.476 { 00:18:55.476 "name": "BaseBdev4", 00:18:55.476 "uuid": "1a2c56de-ffb6-4a08-97b7-e70f735e1490", 00:18:55.476 "is_configured": true, 00:18:55.476 "data_offset": 2048, 00:18:55.476 "data_size": 63488 00:18:55.476 } 00:18:55.476 ] 00:18:55.476 } 00:18:55.476 } 00:18:55.476 }' 00:18:55.476 13:53:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:55.737 BaseBdev2 00:18:55.737 BaseBdev3 00:18:55.737 BaseBdev4' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.737 
13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.737 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.738 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.738 [2024-10-01 13:53:05.924620] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.738 [2024-10-01 13:53:05.924686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.738 [2024-10-01 13:53:05.924823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.738 [2024-10-01 13:53:05.925187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.738 [2024-10-01 13:53:05.925215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83485 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83485 ']' 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83485 
00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83485 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.001 killing process with pid 83485 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83485' 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83485 00:18:56.001 [2024-10-01 13:53:05.974133] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.001 13:53:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83485 00:18:56.260 [2024-10-01 13:53:06.423666] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.166 13:53:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:58.166 00:18:58.166 real 0m12.171s 00:18:58.166 user 0m19.047s 00:18:58.166 sys 0m2.499s 00:18:58.166 13:53:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.166 13:53:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 ************************************ 00:18:58.166 END TEST raid5f_state_function_test_sb 00:18:58.166 ************************************ 00:18:58.166 13:53:07 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:58.166 13:53:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 
']' 00:18:58.166 13:53:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.166 13:53:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 ************************************ 00:18:58.166 START TEST raid5f_superblock_test 00:18:58.166 ************************************ 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84161 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84161 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84161 ']' 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.166 13:53:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 [2024-10-01 13:53:08.039193] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:18:58.166 [2024-10-01 13:53:08.039337] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84161 ] 00:18:58.166 [2024-10-01 13:53:08.198795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.425 [2024-10-01 13:53:08.475285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.684 [2024-10-01 13:53:08.719905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.684 [2024-10-01 13:53:08.719969] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 malloc1 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 [2024-10-01 13:53:08.948415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.944 [2024-10-01 13:53:08.948524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.944 [2024-10-01 13:53:08.948560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.944 [2024-10-01 13:53:08.948580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.944 [2024-10-01 13:53:08.951386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.944 [2024-10-01 13:53:08.951450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.944 pt1 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 malloc2 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 [2024-10-01 13:53:09.020657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.944 [2024-10-01 13:53:09.020753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.944 [2024-10-01 13:53:09.020787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.944 [2024-10-01 13:53:09.020802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.944 [2024-10-01 13:53:09.023700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.944 [2024-10-01 13:53:09.023748] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.944 pt2 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 malloc3 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.944 [2024-10-01 13:53:09.084489] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:58.944 [2024-10-01 13:53:09.084580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.944 [2024-10-01 13:53:09.084611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:58.944 [2024-10-01 13:53:09.084626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.944 [2024-10-01 13:53:09.087458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.944 [2024-10-01 13:53:09.087514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:58.944 pt3 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:58.944 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.944 13:53:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.203 malloc4 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.203 [2024-10-01 13:53:09.148282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:59.203 [2024-10-01 13:53:09.148377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.203 [2024-10-01 13:53:09.148421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:59.203 [2024-10-01 13:53:09.148438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.203 [2024-10-01 13:53:09.151273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.203 [2024-10-01 13:53:09.151320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:59.203 pt4 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.203 13:53:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.203 [2024-10-01 13:53:09.160345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.203 [2024-10-01 13:53:09.162813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.203 [2024-10-01 13:53:09.162895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:59.203 [2024-10-01 13:53:09.162972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:59.204 [2024-10-01 13:53:09.163210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:59.204 [2024-10-01 13:53:09.163240] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:59.204 [2024-10-01 13:53:09.163601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:59.204 [2024-10-01 13:53:09.171920] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:59.204 [2024-10-01 13:53:09.171955] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:59.204 [2024-10-01 13:53:09.172256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.204 
13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.204 "name": "raid_bdev1", 00:18:59.204 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:18:59.204 "strip_size_kb": 64, 00:18:59.204 "state": "online", 00:18:59.204 "raid_level": "raid5f", 00:18:59.204 "superblock": true, 00:18:59.204 "num_base_bdevs": 4, 00:18:59.204 "num_base_bdevs_discovered": 4, 00:18:59.204 "num_base_bdevs_operational": 4, 00:18:59.204 "base_bdevs_list": [ 00:18:59.204 { 00:18:59.204 "name": "pt1", 00:18:59.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.204 "is_configured": true, 00:18:59.204 "data_offset": 2048, 00:18:59.204 "data_size": 63488 00:18:59.204 }, 00:18:59.204 { 00:18:59.204 "name": "pt2", 00:18:59.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.204 "is_configured": true, 00:18:59.204 "data_offset": 2048, 00:18:59.204 
"data_size": 63488 00:18:59.204 }, 00:18:59.204 { 00:18:59.204 "name": "pt3", 00:18:59.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.204 "is_configured": true, 00:18:59.204 "data_offset": 2048, 00:18:59.204 "data_size": 63488 00:18:59.204 }, 00:18:59.204 { 00:18:59.204 "name": "pt4", 00:18:59.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.204 "is_configured": true, 00:18:59.204 "data_offset": 2048, 00:18:59.204 "data_size": 63488 00:18:59.204 } 00:18:59.204 ] 00:18:59.204 }' 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.204 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.462 [2024-10-01 13:53:09.565866] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.462 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.462 "name": "raid_bdev1", 00:18:59.462 "aliases": [ 00:18:59.462 "9867ea02-7366-4c0d-a1a7-5cfe87496210" 00:18:59.462 ], 00:18:59.462 "product_name": "Raid Volume", 00:18:59.462 "block_size": 512, 00:18:59.462 "num_blocks": 190464, 00:18:59.462 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:18:59.462 "assigned_rate_limits": { 00:18:59.462 "rw_ios_per_sec": 0, 00:18:59.462 "rw_mbytes_per_sec": 0, 00:18:59.462 "r_mbytes_per_sec": 0, 00:18:59.462 "w_mbytes_per_sec": 0 00:18:59.462 }, 00:18:59.462 "claimed": false, 00:18:59.462 "zoned": false, 00:18:59.462 "supported_io_types": { 00:18:59.462 "read": true, 00:18:59.462 "write": true, 00:18:59.462 "unmap": false, 00:18:59.462 "flush": false, 00:18:59.462 "reset": true, 00:18:59.462 "nvme_admin": false, 00:18:59.462 "nvme_io": false, 00:18:59.462 "nvme_io_md": false, 00:18:59.462 "write_zeroes": true, 00:18:59.462 "zcopy": false, 00:18:59.462 "get_zone_info": false, 00:18:59.462 "zone_management": false, 00:18:59.462 "zone_append": false, 00:18:59.462 "compare": false, 00:18:59.462 "compare_and_write": false, 00:18:59.462 "abort": false, 00:18:59.462 "seek_hole": false, 00:18:59.462 "seek_data": false, 00:18:59.462 "copy": false, 00:18:59.462 "nvme_iov_md": false 00:18:59.462 }, 00:18:59.462 "driver_specific": { 00:18:59.462 "raid": { 00:18:59.462 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:18:59.462 "strip_size_kb": 64, 00:18:59.462 "state": "online", 00:18:59.462 "raid_level": "raid5f", 00:18:59.462 "superblock": true, 00:18:59.462 "num_base_bdevs": 4, 00:18:59.462 "num_base_bdevs_discovered": 4, 00:18:59.462 "num_base_bdevs_operational": 4, 00:18:59.462 "base_bdevs_list": [ 00:18:59.462 { 00:18:59.462 "name": "pt1", 00:18:59.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.462 "is_configured": true, 00:18:59.462 "data_offset": 2048, 
00:18:59.462 "data_size": 63488 00:18:59.462 }, 00:18:59.462 { 00:18:59.462 "name": "pt2", 00:18:59.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.463 "is_configured": true, 00:18:59.463 "data_offset": 2048, 00:18:59.463 "data_size": 63488 00:18:59.463 }, 00:18:59.463 { 00:18:59.463 "name": "pt3", 00:18:59.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.463 "is_configured": true, 00:18:59.463 "data_offset": 2048, 00:18:59.463 "data_size": 63488 00:18:59.463 }, 00:18:59.463 { 00:18:59.463 "name": "pt4", 00:18:59.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.463 "is_configured": true, 00:18:59.463 "data_offset": 2048, 00:18:59.463 "data_size": 63488 00:18:59.463 } 00:18:59.463 ] 00:18:59.463 } 00:18:59.463 } 00:18:59.463 }' 00:18:59.463 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.463 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:59.463 pt2 00:18:59.463 pt3 00:18:59.463 pt4' 00:18:59.463 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.720 13:53:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.720 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:59.721 [2024-10-01 13:53:09.865854] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9867ea02-7366-4c0d-a1a7-5cfe87496210 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9867ea02-7366-4c0d-a1a7-5cfe87496210 ']' 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.721 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.721 [2024-10-01 13:53:09.909636] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.721 [2024-10-01 13:53:09.909702] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.721 [2024-10-01 13:53:09.909836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.721 [2024-10-01 13:53:09.909947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.721 [2024-10-01 13:53:09.909971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.980 
13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 [2024-10-01 13:53:10.081666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:59.980 [2024-10-01 13:53:10.084312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:59.980 [2024-10-01 13:53:10.084386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:59.980 [2024-10-01 13:53:10.084452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:59.980 [2024-10-01 13:53:10.084525] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:59.980 [2024-10-01 13:53:10.084606] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:59.980 [2024-10-01 13:53:10.084643] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:59.980 [2024-10-01 13:53:10.084672] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:59.980 [2024-10-01 13:53:10.084694] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.980 [2024-10-01 13:53:10.084716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:59.980 request: 00:18:59.980 { 00:18:59.980 "name": "raid_bdev1", 00:18:59.980 "raid_level": "raid5f", 00:18:59.980 "base_bdevs": [ 00:18:59.980 "malloc1", 00:18:59.980 "malloc2", 00:18:59.980 "malloc3", 00:18:59.980 "malloc4" 00:18:59.980 ], 00:18:59.980 "strip_size_kb": 64, 00:18:59.980 "superblock": false, 00:18:59.980 "method": "bdev_raid_create", 00:18:59.980 "req_id": 1 00:18:59.980 } 00:18:59.980 Got JSON-RPC error response 
00:18:59.980 response: 00:18:59.980 { 00:18:59.980 "code": -17, 00:18:59.980 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:59.980 } 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.980 [2024-10-01 13:53:10.149454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:59.980 [2024-10-01 13:53:10.149584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:59.980 [2024-10-01 13:53:10.149613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:59.980 [2024-10-01 13:53:10.149632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.980 [2024-10-01 13:53:10.152764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.980 [2024-10-01 13:53:10.152828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:59.980 [2024-10-01 13:53:10.152963] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:59.980 [2024-10-01 13:53:10.153061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.980 pt1 00:18:59.980 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.981 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.239 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.239 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.239 "name": "raid_bdev1", 00:19:00.239 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:00.239 "strip_size_kb": 64, 00:19:00.239 "state": "configuring", 00:19:00.239 "raid_level": "raid5f", 00:19:00.239 "superblock": true, 00:19:00.239 "num_base_bdevs": 4, 00:19:00.239 "num_base_bdevs_discovered": 1, 00:19:00.239 "num_base_bdevs_operational": 4, 00:19:00.239 "base_bdevs_list": [ 00:19:00.239 { 00:19:00.239 "name": "pt1", 00:19:00.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.239 "is_configured": true, 00:19:00.239 "data_offset": 2048, 00:19:00.239 "data_size": 63488 00:19:00.239 }, 00:19:00.239 { 00:19:00.239 "name": null, 00:19:00.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.239 "is_configured": false, 00:19:00.239 "data_offset": 2048, 00:19:00.239 "data_size": 63488 00:19:00.239 }, 00:19:00.239 { 00:19:00.239 "name": null, 00:19:00.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:00.239 "is_configured": false, 00:19:00.239 "data_offset": 2048, 00:19:00.239 "data_size": 63488 00:19:00.239 }, 00:19:00.239 { 00:19:00.239 "name": null, 00:19:00.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:00.239 "is_configured": false, 00:19:00.239 "data_offset": 2048, 00:19:00.239 "data_size": 63488 00:19:00.239 } 00:19:00.239 ] 00:19:00.239 }' 
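The `verify_raid_bdev_state` helper above pulls one entry out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares its fields against the expected state (`raid_bdev1 configuring raid5f 64 4`). A minimal Python sketch of what that filter and those comparisons amount to, using the field subset from the JSON dump above (treating the shell comparisons as plain assertions is a paraphrase for illustration, not SPDK code):

```python
# Abridged raid_bdev_info as dumped by `bdev_raid_get_bdevs all` in the log.
bdevs = [{
    "name": "raid_bdev1",
    "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": True,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4,
}]

# Equivalent of the jq filter: .[] | select(.name == "raid_bdev1")
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The checks verify_raid_bdev_state performs against its arguments
# (expected_state=configuring, raid_level=raid5f, strip_size=64,
# num_base_bdevs_operational=4):
assert info["state"] == "configuring"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
print("verified:", info["name"], info["state"])
```

At this point in the test only `pt1` has been claimed, which is why `num_base_bdevs_discovered` is 1 while `num_base_bdevs_operational` stays 4.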
00:19:00.239 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.239 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.498 [2024-10-01 13:53:10.596782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.498 [2024-10-01 13:53:10.596925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.498 [2024-10-01 13:53:10.596958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:00.498 [2024-10-01 13:53:10.596977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.498 [2024-10-01 13:53:10.597685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.498 [2024-10-01 13:53:10.597731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.498 [2024-10-01 13:53:10.597862] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:00.498 [2024-10-01 13:53:10.597899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.498 pt2 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.498 [2024-10-01 13:53:10.608813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:00.498 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.498 "name": "raid_bdev1", 00:19:00.498 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:00.498 "strip_size_kb": 64, 00:19:00.498 "state": "configuring", 00:19:00.498 "raid_level": "raid5f", 00:19:00.498 "superblock": true, 00:19:00.498 "num_base_bdevs": 4, 00:19:00.498 "num_base_bdevs_discovered": 1, 00:19:00.498 "num_base_bdevs_operational": 4, 00:19:00.498 "base_bdevs_list": [ 00:19:00.498 { 00:19:00.498 "name": "pt1", 00:19:00.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.498 "is_configured": true, 00:19:00.498 "data_offset": 2048, 00:19:00.498 "data_size": 63488 00:19:00.498 }, 00:19:00.498 { 00:19:00.498 "name": null, 00:19:00.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.498 "is_configured": false, 00:19:00.498 "data_offset": 0, 00:19:00.498 "data_size": 63488 00:19:00.498 }, 00:19:00.498 { 00:19:00.499 "name": null, 00:19:00.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:00.499 "is_configured": false, 00:19:00.499 "data_offset": 2048, 00:19:00.499 "data_size": 63488 00:19:00.499 }, 00:19:00.499 { 00:19:00.499 "name": null, 00:19:00.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:00.499 "is_configured": false, 00:19:00.499 "data_offset": 2048, 00:19:00.499 "data_size": 63488 00:19:00.499 } 00:19:00.499 ] 00:19:00.499 }' 00:19:00.499 13:53:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.499 13:53:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
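Earlier in the log, `bdev_raid_create` fails with -17 because the malloc base bdevs already carry a superblock from a different raid bdev, so the half-configured `raid_bdev1` is rolled back (`raid_bdev_delete` / `raid_bdev_cleanup`). A sketch of that request/response pair as plain JSON, with the payload copied from the log (illustration only, not an SPDK RPC client):

```python
import json

# Request from the log: create raid_bdev1 over four already-claimed
# malloc bdevs.
request = json.loads("""{
    "name": "raid_bdev1",
    "raid_level": "raid5f",
    "base_bdevs": ["malloc1", "malloc2", "malloc3", "malloc4"],
    "strip_size_kb": 64,
    "superblock": false,
    "method": "bdev_raid_create",
    "req_id": 1
}""")

# Error response from the log: the RPC is rejected with -17 (EEXIST).
response = {
    "code": -17,
    "message": "Failed to create RAID bdev raid_bdev1: File exists",
}

print(request["method"], "->", response["code"])
```

The shell side then asserts the failure was a plain RPC error (`es=1`, not a crash with `es > 128`) before continuing with the passthru-based retry.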
00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.094 [2024-10-01 13:53:11.068163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.094 [2024-10-01 13:53:11.068289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.094 [2024-10-01 13:53:11.068326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:01.094 [2024-10-01 13:53:11.068356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.094 [2024-10-01 13:53:11.069032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.094 [2024-10-01 13:53:11.069073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.094 [2024-10-01 13:53:11.069208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:01.094 [2024-10-01 13:53:11.069241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.094 pt2 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:01.094 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.095 [2024-10-01 13:53:11.080119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:01.095 [2024-10-01 13:53:11.080238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.095 [2024-10-01 13:53:11.080273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:01.095 [2024-10-01 13:53:11.080288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.095 [2024-10-01 13:53:11.080913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.095 [2024-10-01 13:53:11.080954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:01.095 [2024-10-01 13:53:11.081081] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:01.095 [2024-10-01 13:53:11.081115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:01.095 pt3 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.095 [2024-10-01 13:53:11.092051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:01.095 [2024-10-01 13:53:11.092163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.095 [2024-10-01 13:53:11.092201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:01.095 [2024-10-01 13:53:11.092216] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.095 [2024-10-01 13:53:11.092902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.095 [2024-10-01 13:53:11.092936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:01.095 [2024-10-01 13:53:11.093055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:01.095 [2024-10-01 13:53:11.093087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:01.095 [2024-10-01 13:53:11.093290] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:01.095 [2024-10-01 13:53:11.093320] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:01.095 [2024-10-01 13:53:11.093681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:01.095 [2024-10-01 13:53:11.101546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:01.095 [2024-10-01 13:53:11.101591] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:01.095 [2024-10-01 13:53:11.101913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.095 pt4 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.095 "name": "raid_bdev1", 00:19:01.095 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:01.095 "strip_size_kb": 64, 00:19:01.095 "state": "online", 00:19:01.095 "raid_level": "raid5f", 00:19:01.095 "superblock": true, 00:19:01.095 "num_base_bdevs": 4, 00:19:01.095 "num_base_bdevs_discovered": 4, 00:19:01.095 "num_base_bdevs_operational": 4, 00:19:01.095 "base_bdevs_list": [ 00:19:01.095 { 00:19:01.095 "name": "pt1", 00:19:01.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.095 "is_configured": true, 00:19:01.095 
"data_offset": 2048, 00:19:01.095 "data_size": 63488 00:19:01.095 }, 00:19:01.095 { 00:19:01.095 "name": "pt2", 00:19:01.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.095 "is_configured": true, 00:19:01.095 "data_offset": 2048, 00:19:01.095 "data_size": 63488 00:19:01.095 }, 00:19:01.095 { 00:19:01.095 "name": "pt3", 00:19:01.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:01.095 "is_configured": true, 00:19:01.095 "data_offset": 2048, 00:19:01.095 "data_size": 63488 00:19:01.095 }, 00:19:01.095 { 00:19:01.095 "name": "pt4", 00:19:01.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:01.095 "is_configured": true, 00:19:01.095 "data_offset": 2048, 00:19:01.095 "data_size": 63488 00:19:01.095 } 00:19:01.095 ] 00:19:01.095 }' 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.095 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.354 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.612 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.612 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.612 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.612 13:53:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.612 [2024-10-01 13:53:11.555995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.613 "name": "raid_bdev1", 00:19:01.613 "aliases": [ 00:19:01.613 "9867ea02-7366-4c0d-a1a7-5cfe87496210" 00:19:01.613 ], 00:19:01.613 "product_name": "Raid Volume", 00:19:01.613 "block_size": 512, 00:19:01.613 "num_blocks": 190464, 00:19:01.613 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:01.613 "assigned_rate_limits": { 00:19:01.613 "rw_ios_per_sec": 0, 00:19:01.613 "rw_mbytes_per_sec": 0, 00:19:01.613 "r_mbytes_per_sec": 0, 00:19:01.613 "w_mbytes_per_sec": 0 00:19:01.613 }, 00:19:01.613 "claimed": false, 00:19:01.613 "zoned": false, 00:19:01.613 "supported_io_types": { 00:19:01.613 "read": true, 00:19:01.613 "write": true, 00:19:01.613 "unmap": false, 00:19:01.613 "flush": false, 00:19:01.613 "reset": true, 00:19:01.613 "nvme_admin": false, 00:19:01.613 "nvme_io": false, 00:19:01.613 "nvme_io_md": false, 00:19:01.613 "write_zeroes": true, 00:19:01.613 "zcopy": false, 00:19:01.613 "get_zone_info": false, 00:19:01.613 "zone_management": false, 00:19:01.613 "zone_append": false, 00:19:01.613 "compare": false, 00:19:01.613 "compare_and_write": false, 00:19:01.613 "abort": false, 00:19:01.613 "seek_hole": false, 00:19:01.613 "seek_data": false, 00:19:01.613 "copy": false, 00:19:01.613 "nvme_iov_md": false 00:19:01.613 }, 00:19:01.613 "driver_specific": { 00:19:01.613 "raid": { 00:19:01.613 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:01.613 "strip_size_kb": 64, 00:19:01.613 "state": "online", 00:19:01.613 "raid_level": "raid5f", 00:19:01.613 "superblock": true, 00:19:01.613 "num_base_bdevs": 4, 00:19:01.613 "num_base_bdevs_discovered": 4, 
00:19:01.613 "num_base_bdevs_operational": 4, 00:19:01.613 "base_bdevs_list": [ 00:19:01.613 { 00:19:01.613 "name": "pt1", 00:19:01.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.613 "is_configured": true, 00:19:01.613 "data_offset": 2048, 00:19:01.613 "data_size": 63488 00:19:01.613 }, 00:19:01.613 { 00:19:01.613 "name": "pt2", 00:19:01.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.613 "is_configured": true, 00:19:01.613 "data_offset": 2048, 00:19:01.613 "data_size": 63488 00:19:01.613 }, 00:19:01.613 { 00:19:01.613 "name": "pt3", 00:19:01.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:01.613 "is_configured": true, 00:19:01.613 "data_offset": 2048, 00:19:01.613 "data_size": 63488 00:19:01.613 }, 00:19:01.613 { 00:19:01.613 "name": "pt4", 00:19:01.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:01.613 "is_configured": true, 00:19:01.613 "data_offset": 2048, 00:19:01.613 "data_size": 63488 00:19:01.613 } 00:19:01.613 ] 00:19:01.613 } 00:19:01.613 } 00:19:01.613 }' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:01.613 pt2 00:19:01.613 pt3 00:19:01.613 pt4' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.613 13:53:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.613 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.613 
13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:01.872 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.873 [2024-10-01 13:53:11.903984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
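The property check above reduces the online raid volume to its configured base bdev names with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, yielding `pt1 pt2 pt3 pt4`. The same filter in Python, over the `base_bdevs_list` shape shown in the dump (a sketch of the jq expression, not SPDK code):

```python
# base_bdevs_list entries as they appear in the bdev_get_bdevs dump above
# (abridged to the fields the filter touches).
base_bdevs_list = [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001", "is_configured": True},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002", "is_configured": True},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003", "is_configured": True},
    {"name": "pt4", "uuid": "00000000-0000-0000-0000-000000000004", "is_configured": True},
]

# jq: .base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"] for b in base_bdevs_list if b["is_configured"]]
print(" ".join(names))  # pt1 pt2 pt3 pt4
```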
00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9867ea02-7366-4c0d-a1a7-5cfe87496210 '!=' 9867ea02-7366-4c0d-a1a7-5cfe87496210 ']' 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.873 [2024-10-01 13:53:11.951836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.873 13:53:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.873 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.873 "name": "raid_bdev1", 00:19:01.873 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:01.873 "strip_size_kb": 64, 00:19:01.873 "state": "online", 00:19:01.873 "raid_level": "raid5f", 00:19:01.873 "superblock": true, 00:19:01.873 "num_base_bdevs": 4, 00:19:01.873 "num_base_bdevs_discovered": 3, 00:19:01.873 "num_base_bdevs_operational": 3, 00:19:01.873 "base_bdevs_list": [ 00:19:01.873 { 00:19:01.873 "name": null, 00:19:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.873 "is_configured": false, 00:19:01.873 "data_offset": 0, 00:19:01.873 "data_size": 63488 00:19:01.873 }, 00:19:01.873 { 00:19:01.873 "name": "pt2", 00:19:01.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.873 "is_configured": true, 00:19:01.873 "data_offset": 2048, 00:19:01.873 "data_size": 63488 00:19:01.873 }, 00:19:01.873 { 00:19:01.873 "name": "pt3", 00:19:01.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:01.873 "is_configured": true, 00:19:01.873 "data_offset": 2048, 00:19:01.873 "data_size": 63488 00:19:01.873 }, 00:19:01.873 { 00:19:01.873 "name": "pt4", 00:19:01.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:01.873 "is_configured": true, 00:19:01.873 
"data_offset": 2048, 00:19:01.873 "data_size": 63488 00:19:01.873 } 00:19:01.873 ] 00:19:01.873 }' 00:19:01.873 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.873 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 [2024-10-01 13:53:12.403716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.440 [2024-10-01 13:53:12.403789] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.440 [2024-10-01 13:53:12.403919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.440 [2024-10-01 13:53:12.404025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.440 [2024-10-01 13:53:12.404042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 [2024-10-01 13:53:12.507734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.440 [2024-10-01 13:53:12.507876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.440 [2024-10-01 13:53:12.507912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:02.440 [2024-10-01 13:53:12.507927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.440 [2024-10-01 13:53:12.511093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.440 [2024-10-01 13:53:12.511166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.440 [2024-10-01 13:53:12.511316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.440 [2024-10-01 13:53:12.511383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.440 pt2 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.440 "name": "raid_bdev1", 00:19:02.440 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:02.440 "strip_size_kb": 64, 00:19:02.440 "state": "configuring", 00:19:02.440 "raid_level": "raid5f", 00:19:02.440 "superblock": true, 00:19:02.440 
"num_base_bdevs": 4, 00:19:02.440 "num_base_bdevs_discovered": 1, 00:19:02.440 "num_base_bdevs_operational": 3, 00:19:02.440 "base_bdevs_list": [ 00:19:02.440 { 00:19:02.440 "name": null, 00:19:02.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.440 "is_configured": false, 00:19:02.440 "data_offset": 2048, 00:19:02.440 "data_size": 63488 00:19:02.440 }, 00:19:02.440 { 00:19:02.440 "name": "pt2", 00:19:02.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.440 "is_configured": true, 00:19:02.440 "data_offset": 2048, 00:19:02.440 "data_size": 63488 00:19:02.440 }, 00:19:02.440 { 00:19:02.440 "name": null, 00:19:02.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:02.440 "is_configured": false, 00:19:02.440 "data_offset": 2048, 00:19:02.440 "data_size": 63488 00:19:02.440 }, 00:19:02.440 { 00:19:02.440 "name": null, 00:19:02.440 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:02.440 "is_configured": false, 00:19:02.440 "data_offset": 2048, 00:19:02.440 "data_size": 63488 00:19:02.440 } 00:19:02.440 ] 00:19:02.440 }' 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.440 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.009 [2024-10-01 13:53:12.971802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:03.009 [2024-10-01 
13:53:12.971938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.009 [2024-10-01 13:53:12.971974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:03.009 [2024-10-01 13:53:12.971990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.009 [2024-10-01 13:53:12.972651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.009 [2024-10-01 13:53:12.972685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:03.009 [2024-10-01 13:53:12.972816] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:03.009 [2024-10-01 13:53:12.972861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.009 pt3 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:03.009 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.010 13:53:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.010 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.010 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.010 "name": "raid_bdev1", 00:19:03.010 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:03.010 "strip_size_kb": 64, 00:19:03.010 "state": "configuring", 00:19:03.010 "raid_level": "raid5f", 00:19:03.010 "superblock": true, 00:19:03.010 "num_base_bdevs": 4, 00:19:03.010 "num_base_bdevs_discovered": 2, 00:19:03.010 "num_base_bdevs_operational": 3, 00:19:03.010 "base_bdevs_list": [ 00:19:03.010 { 00:19:03.010 "name": null, 00:19:03.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.010 "is_configured": false, 00:19:03.010 "data_offset": 2048, 00:19:03.010 "data_size": 63488 00:19:03.010 }, 00:19:03.010 { 00:19:03.010 "name": "pt2", 00:19:03.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.010 "is_configured": true, 00:19:03.010 "data_offset": 2048, 00:19:03.010 "data_size": 63488 00:19:03.010 }, 00:19:03.010 { 00:19:03.010 "name": "pt3", 00:19:03.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.010 "is_configured": true, 00:19:03.010 "data_offset": 2048, 00:19:03.010 "data_size": 63488 00:19:03.010 }, 00:19:03.010 { 00:19:03.010 "name": null, 00:19:03.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.010 "is_configured": false, 00:19:03.010 "data_offset": 2048, 
00:19:03.010 "data_size": 63488 00:19:03.010 } 00:19:03.010 ] 00:19:03.010 }' 00:19:03.010 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.010 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.269 [2024-10-01 13:53:13.395746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:03.269 [2024-10-01 13:53:13.396195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.269 [2024-10-01 13:53:13.396246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:03.269 [2024-10-01 13:53:13.396262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.269 [2024-10-01 13:53:13.396944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.269 [2024-10-01 13:53:13.396979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:03.269 [2024-10-01 13:53:13.397117] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:03.269 [2024-10-01 13:53:13.397151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:03.269 [2024-10-01 13:53:13.397322] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:03.269 [2024-10-01 13:53:13.397345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:03.269 [2024-10-01 13:53:13.397693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:03.269 [2024-10-01 13:53:13.405311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:03.269 [2024-10-01 13:53:13.405358] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:03.269 [2024-10-01 13:53:13.405878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.269 pt4 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.269 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.269 
13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.270 "name": "raid_bdev1", 00:19:03.270 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:03.270 "strip_size_kb": 64, 00:19:03.270 "state": "online", 00:19:03.270 "raid_level": "raid5f", 00:19:03.270 "superblock": true, 00:19:03.270 "num_base_bdevs": 4, 00:19:03.270 "num_base_bdevs_discovered": 3, 00:19:03.270 "num_base_bdevs_operational": 3, 00:19:03.270 "base_bdevs_list": [ 00:19:03.270 { 00:19:03.270 "name": null, 00:19:03.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.270 "is_configured": false, 00:19:03.270 "data_offset": 2048, 00:19:03.270 "data_size": 63488 00:19:03.270 }, 00:19:03.270 { 00:19:03.270 "name": "pt2", 00:19:03.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.270 "is_configured": true, 00:19:03.270 "data_offset": 2048, 00:19:03.270 "data_size": 63488 00:19:03.270 }, 00:19:03.270 { 00:19:03.270 "name": "pt3", 00:19:03.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.270 "is_configured": true, 00:19:03.270 "data_offset": 2048, 00:19:03.270 "data_size": 63488 00:19:03.270 }, 00:19:03.270 { 00:19:03.270 "name": "pt4", 00:19:03.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.270 "is_configured": true, 00:19:03.270 "data_offset": 2048, 00:19:03.270 "data_size": 63488 00:19:03.270 } 00:19:03.270 ] 00:19:03.270 }' 00:19:03.270 13:53:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.270 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.837 [2024-10-01 13:53:13.879721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.837 [2024-10-01 13:53:13.879788] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.837 [2024-10-01 13:53:13.879918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.837 [2024-10-01 13:53:13.880021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.837 [2024-10-01 13:53:13.880041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.837 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 [2024-10-01 13:53:13.947729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.838 [2024-10-01 13:53:13.948225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.838 [2024-10-01 13:53:13.948277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:03.838 [2024-10-01 13:53:13.948299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.838 [2024-10-01 13:53:13.951886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.838 [2024-10-01 13:53:13.951960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.838 [2024-10-01 13:53:13.952121] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:03.838 [2024-10-01 13:53:13.952204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.838 pt1 
00:19:03.838 [2024-10-01 13:53:13.952475] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:03.838 [2024-10-01 13:53:13.952509] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.838 [2024-10-01 13:53:13.952534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:03.838 [2024-10-01 13:53:13.952627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.838 [2024-10-01 13:53:13.952783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.838 13:53:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.838 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.838 "name": "raid_bdev1", 00:19:03.838 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:03.838 "strip_size_kb": 64, 00:19:03.838 "state": "configuring", 00:19:03.838 "raid_level": "raid5f", 00:19:03.838 "superblock": true, 00:19:03.838 "num_base_bdevs": 4, 00:19:03.838 "num_base_bdevs_discovered": 2, 00:19:03.838 "num_base_bdevs_operational": 3, 00:19:03.838 "base_bdevs_list": [ 00:19:03.838 { 00:19:03.838 "name": null, 00:19:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.838 "is_configured": false, 00:19:03.838 "data_offset": 2048, 00:19:03.838 "data_size": 63488 00:19:03.838 }, 00:19:03.838 { 00:19:03.838 "name": "pt2", 00:19:03.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.838 "is_configured": true, 00:19:03.838 "data_offset": 2048, 00:19:03.838 "data_size": 63488 00:19:03.838 }, 00:19:03.838 { 00:19:03.838 "name": "pt3", 00:19:03.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.838 "is_configured": true, 00:19:03.838 "data_offset": 2048, 00:19:03.838 "data_size": 63488 00:19:03.838 }, 00:19:03.838 { 00:19:03.838 "name": null, 00:19:03.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.838 "is_configured": false, 00:19:03.838 "data_offset": 2048, 00:19:03.838 "data_size": 63488 00:19:03.838 } 00:19:03.838 ] 
00:19:03.838 }' 00:19:03.838 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.838 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.406 [2024-10-01 13:53:14.475772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:04.406 [2024-10-01 13:53:14.476169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.406 [2024-10-01 13:53:14.476230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:04.406 [2024-10-01 13:53:14.476247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.406 [2024-10-01 13:53:14.476935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.406 [2024-10-01 13:53:14.476972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:04.406 [2024-10-01 13:53:14.477113] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:04.406 [2024-10-01 13:53:14.477148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:04.406 [2024-10-01 13:53:14.477317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:04.406 [2024-10-01 13:53:14.477331] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:04.406 [2024-10-01 13:53:14.477677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:04.406 [2024-10-01 13:53:14.485206] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:04.406 [2024-10-01 13:53:14.485260] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:04.406 [2024-10-01 13:53:14.485743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.406 pt4 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.406 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.407 13:53:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.407 "name": "raid_bdev1", 00:19:04.407 "uuid": "9867ea02-7366-4c0d-a1a7-5cfe87496210", 00:19:04.407 "strip_size_kb": 64, 00:19:04.407 "state": "online", 00:19:04.407 "raid_level": "raid5f", 00:19:04.407 "superblock": true, 00:19:04.407 "num_base_bdevs": 4, 00:19:04.407 "num_base_bdevs_discovered": 3, 00:19:04.407 "num_base_bdevs_operational": 3, 00:19:04.407 "base_bdevs_list": [ 00:19:04.407 { 00:19:04.407 "name": null, 00:19:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.407 "is_configured": false, 00:19:04.407 "data_offset": 2048, 00:19:04.407 "data_size": 63488 00:19:04.407 }, 00:19:04.407 { 00:19:04.407 "name": "pt2", 00:19:04.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.407 "is_configured": true, 00:19:04.407 "data_offset": 2048, 00:19:04.407 "data_size": 63488 00:19:04.407 }, 00:19:04.407 { 00:19:04.407 "name": "pt3", 00:19:04.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.407 "is_configured": true, 00:19:04.407 "data_offset": 2048, 00:19:04.407 "data_size": 63488 
00:19:04.407 }, 00:19:04.407 { 00:19:04.407 "name": "pt4", 00:19:04.407 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.407 "is_configured": true, 00:19:04.407 "data_offset": 2048, 00:19:04.407 "data_size": 63488 00:19:04.407 } 00:19:04.407 ] 00:19:04.407 }' 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.407 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.975 13:53:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.975 [2024-10-01 13:53:14.971991] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9867ea02-7366-4c0d-a1a7-5cfe87496210 '!=' 9867ea02-7366-4c0d-a1a7-5cfe87496210 ']' 00:19:04.975 13:53:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84161 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84161 ']' 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84161 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84161 00:19:04.975 killing process with pid 84161 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84161' 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84161 00:19:04.975 [2024-10-01 13:53:15.051275] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.975 13:53:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84161 00:19:04.975 [2024-10-01 13:53:15.051490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.975 [2024-10-01 13:53:15.051630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.975 [2024-10-01 13:53:15.051651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:05.542 [2024-10-01 13:53:15.509806] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.915 13:53:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:06.915 
00:19:06.915 real 0m9.022s 00:19:06.915 user 0m13.828s 00:19:06.915 sys 0m1.932s 00:19:06.915 13:53:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.915 13:53:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.915 ************************************ 00:19:06.915 END TEST raid5f_superblock_test 00:19:06.915 ************************************ 00:19:06.915 13:53:17 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:06.915 13:53:17 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:06.915 13:53:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:06.915 13:53:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.915 13:53:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.915 ************************************ 00:19:06.915 START TEST raid5f_rebuild_test 00:19:06.915 ************************************ 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:06.915 13:53:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84652 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84652 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84652 ']' 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.915 13:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.174 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:07.174 Zero copy mechanism will not be used. 00:19:07.174 [2024-10-01 13:53:17.161773] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:19:07.174 [2024-10-01 13:53:17.161957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84652 ] 00:19:07.174 [2024-10-01 13:53:17.341833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.432 [2024-10-01 13:53:17.572365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.689 [2024-10-01 13:53:17.787323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.689 [2024-10-01 13:53:17.787372] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.947 BaseBdev1_malloc 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.947 [2024-10-01 13:53:18.086125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:07.947 [2024-10-01 13:53:18.087131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.947 [2024-10-01 13:53:18.087173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:07.947 [2024-10-01 13:53:18.087193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.947 [2024-10-01 13:53:18.089822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.947 [2024-10-01 13:53:18.089868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.947 BaseBdev1 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.947 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 BaseBdev2_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 [2024-10-01 13:53:18.152044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:08.207 [2024-10-01 13:53:18.152249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.207 [2024-10-01 13:53:18.152308] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:08.207 [2024-10-01 13:53:18.152473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.207 [2024-10-01 13:53:18.154894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.207 [2024-10-01 13:53:18.154936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:08.207 BaseBdev2 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 BaseBdev3_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 [2024-10-01 13:53:18.210758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:08.207 [2024-10-01 13:53:18.210936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.207 [2024-10-01 13:53:18.211034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:08.207 [2024-10-01 13:53:18.211110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.207 
[2024-10-01 13:53:18.213574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.207 [2024-10-01 13:53:18.213713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:08.207 BaseBdev3 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 BaseBdev4_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 [2024-10-01 13:53:18.266832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:08.207 [2024-10-01 13:53:18.267076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.207 [2024-10-01 13:53:18.267136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:08.207 [2024-10-01 13:53:18.267226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.207 [2024-10-01 13:53:18.269708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.207 [2024-10-01 13:53:18.269858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:08.207 BaseBdev4 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 spare_malloc 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 spare_delay 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 [2024-10-01 13:53:18.333469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.207 [2024-10-01 13:53:18.333642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.207 [2024-10-01 13:53:18.333697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:08.207 [2024-10-01 13:53:18.333774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.207 [2024-10-01 13:53:18.336228] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.207 [2024-10-01 13:53:18.336393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.207 spare 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.207 [2024-10-01 13:53:18.345528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.207 [2024-10-01 13:53:18.347663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.207 [2024-10-01 13:53:18.347832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.207 [2024-10-01 13:53:18.347923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.207 [2024-10-01 13:53:18.348092] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:08.207 [2024-10-01 13:53:18.348136] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:08.207 [2024-10-01 13:53:18.348451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.207 [2024-10-01 13:53:18.356626] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:08.207 [2024-10-01 13:53:18.356785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:08.207 [2024-10-01 13:53:18.357125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.207 13:53:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.207 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.208 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.468 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.468 "name": "raid_bdev1", 00:19:08.468 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:08.468 "strip_size_kb": 64, 00:19:08.468 "state": "online", 00:19:08.468 
"raid_level": "raid5f", 00:19:08.468 "superblock": false, 00:19:08.468 "num_base_bdevs": 4, 00:19:08.468 "num_base_bdevs_discovered": 4, 00:19:08.468 "num_base_bdevs_operational": 4, 00:19:08.468 "base_bdevs_list": [ 00:19:08.468 { 00:19:08.468 "name": "BaseBdev1", 00:19:08.468 "uuid": "e8a01ac4-3fc4-5194-bafd-c3766837f0b7", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 0, 00:19:08.468 "data_size": 65536 00:19:08.468 }, 00:19:08.468 { 00:19:08.468 "name": "BaseBdev2", 00:19:08.468 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 0, 00:19:08.468 "data_size": 65536 00:19:08.468 }, 00:19:08.468 { 00:19:08.468 "name": "BaseBdev3", 00:19:08.468 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 0, 00:19:08.468 "data_size": 65536 00:19:08.468 }, 00:19:08.468 { 00:19:08.468 "name": "BaseBdev4", 00:19:08.468 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 0, 00:19:08.468 "data_size": 65536 00:19:08.468 } 00:19:08.468 ] 00:19:08.468 }' 00:19:08.468 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.468 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.727 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.727 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:08.727 [2024-10-01 13:53:18.805833] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.727 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:19:08.728 13:53:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:08.997 [2024-10-01 13:53:19.109324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:08.997 /dev/nbd0 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.997 1+0 records in 00:19:08.997 1+0 records out 00:19:08.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748554 s, 5.5 MB/s 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:08.997 13:53:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:09.256 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:09.833 512+0 records in 00:19:09.833 512+0 records out 00:19:09.833 100663296 bytes (101 MB, 96 MiB) copied, 0.551547 s, 183 MB/s 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.833 
[2024-10-01 13:53:19.996724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.833 13:53:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.833 [2024-10-01 13:53:20.014509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.833 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.091 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.092 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.092 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.092 "name": "raid_bdev1", 00:19:10.092 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:10.092 "strip_size_kb": 64, 00:19:10.092 "state": "online", 00:19:10.092 "raid_level": "raid5f", 00:19:10.092 "superblock": false, 00:19:10.092 "num_base_bdevs": 4, 00:19:10.092 "num_base_bdevs_discovered": 3, 00:19:10.092 "num_base_bdevs_operational": 3, 00:19:10.092 "base_bdevs_list": [ 00:19:10.092 { 00:19:10.092 "name": null, 00:19:10.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.092 "is_configured": false, 00:19:10.092 "data_offset": 0, 00:19:10.092 "data_size": 65536 00:19:10.092 }, 00:19:10.092 { 00:19:10.092 "name": "BaseBdev2", 00:19:10.092 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:10.092 "is_configured": true, 00:19:10.092 "data_offset": 0, 00:19:10.092 "data_size": 65536 00:19:10.092 }, 00:19:10.092 { 00:19:10.092 "name": "BaseBdev3", 00:19:10.092 "uuid": 
"c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:10.092 "is_configured": true, 00:19:10.092 "data_offset": 0, 00:19:10.092 "data_size": 65536 00:19:10.092 }, 00:19:10.092 { 00:19:10.092 "name": "BaseBdev4", 00:19:10.092 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:10.092 "is_configured": true, 00:19:10.092 "data_offset": 0, 00:19:10.092 "data_size": 65536 00:19:10.092 } 00:19:10.092 ] 00:19:10.092 }' 00:19:10.092 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.092 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.350 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.350 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.350 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.350 [2024-10-01 13:53:20.425896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.350 [2024-10-01 13:53:20.444301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:10.350 13:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.350 13:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.350 [2024-10-01 13:53:20.455034] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.286 13:53:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.286 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.544 "name": "raid_bdev1", 00:19:11.544 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:11.544 "strip_size_kb": 64, 00:19:11.544 "state": "online", 00:19:11.544 "raid_level": "raid5f", 00:19:11.544 "superblock": false, 00:19:11.544 "num_base_bdevs": 4, 00:19:11.544 "num_base_bdevs_discovered": 4, 00:19:11.544 "num_base_bdevs_operational": 4, 00:19:11.544 "process": { 00:19:11.544 "type": "rebuild", 00:19:11.544 "target": "spare", 00:19:11.544 "progress": { 00:19:11.544 "blocks": 17280, 00:19:11.544 "percent": 8 00:19:11.544 } 00:19:11.544 }, 00:19:11.544 "base_bdevs_list": [ 00:19:11.544 { 00:19:11.544 "name": "spare", 00:19:11.544 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:11.544 "is_configured": true, 00:19:11.544 "data_offset": 0, 00:19:11.544 "data_size": 65536 00:19:11.544 }, 00:19:11.544 { 00:19:11.544 "name": "BaseBdev2", 00:19:11.544 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:11.544 "is_configured": true, 00:19:11.544 "data_offset": 0, 00:19:11.544 "data_size": 65536 00:19:11.544 }, 00:19:11.544 { 00:19:11.544 "name": "BaseBdev3", 00:19:11.544 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:11.544 "is_configured": true, 00:19:11.544 "data_offset": 0, 00:19:11.544 "data_size": 65536 00:19:11.544 }, 
00:19:11.544 { 00:19:11.544 "name": "BaseBdev4", 00:19:11.544 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:11.544 "is_configured": true, 00:19:11.544 "data_offset": 0, 00:19:11.544 "data_size": 65536 00:19:11.544 } 00:19:11.544 ] 00:19:11.544 }' 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.544 [2024-10-01 13:53:21.586646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.544 [2024-10-01 13:53:21.664715] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.544 [2024-10-01 13:53:21.664817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.544 [2024-10-01 13:53:21.664838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.544 [2024-10-01 13:53:21.664852] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.544 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.545 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.858 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.858 "name": "raid_bdev1", 00:19:11.858 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:11.858 "strip_size_kb": 64, 00:19:11.858 "state": "online", 00:19:11.858 "raid_level": "raid5f", 00:19:11.858 "superblock": false, 00:19:11.858 "num_base_bdevs": 4, 00:19:11.858 "num_base_bdevs_discovered": 3, 00:19:11.858 "num_base_bdevs_operational": 3, 00:19:11.858 "base_bdevs_list": [ 00:19:11.858 { 00:19:11.858 "name": null, 00:19:11.858 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:11.858 "is_configured": false, 00:19:11.858 "data_offset": 0, 00:19:11.858 "data_size": 65536 00:19:11.858 }, 00:19:11.858 { 00:19:11.858 "name": "BaseBdev2", 00:19:11.858 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:11.858 "is_configured": true, 00:19:11.858 "data_offset": 0, 00:19:11.858 "data_size": 65536 00:19:11.858 }, 00:19:11.858 { 00:19:11.858 "name": "BaseBdev3", 00:19:11.858 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:11.858 "is_configured": true, 00:19:11.858 "data_offset": 0, 00:19:11.858 "data_size": 65536 00:19:11.858 }, 00:19:11.858 { 00:19:11.858 "name": "BaseBdev4", 00:19:11.858 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:11.859 "is_configured": true, 00:19:11.859 "data_offset": 0, 00:19:11.859 "data_size": 65536 00:19:11.859 } 00:19:11.859 ] 00:19:11.859 }' 00:19:11.859 13:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.859 13:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.117 13:53:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.117 "name": "raid_bdev1", 00:19:12.117 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:12.117 "strip_size_kb": 64, 00:19:12.117 "state": "online", 00:19:12.117 "raid_level": "raid5f", 00:19:12.117 "superblock": false, 00:19:12.117 "num_base_bdevs": 4, 00:19:12.117 "num_base_bdevs_discovered": 3, 00:19:12.117 "num_base_bdevs_operational": 3, 00:19:12.117 "base_bdevs_list": [ 00:19:12.117 { 00:19:12.117 "name": null, 00:19:12.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.117 "is_configured": false, 00:19:12.117 "data_offset": 0, 00:19:12.117 "data_size": 65536 00:19:12.117 }, 00:19:12.117 { 00:19:12.117 "name": "BaseBdev2", 00:19:12.117 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:12.117 "is_configured": true, 00:19:12.117 "data_offset": 0, 00:19:12.117 "data_size": 65536 00:19:12.117 }, 00:19:12.117 { 00:19:12.117 "name": "BaseBdev3", 00:19:12.117 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:12.117 "is_configured": true, 00:19:12.117 "data_offset": 0, 00:19:12.117 "data_size": 65536 00:19:12.117 }, 00:19:12.117 { 00:19:12.117 "name": "BaseBdev4", 00:19:12.117 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:12.117 "is_configured": true, 00:19:12.117 "data_offset": 0, 00:19:12.117 "data_size": 65536 00:19:12.117 } 00:19:12.117 ] 00:19:12.117 }' 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.117 [2024-10-01 13:53:22.276127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.117 [2024-10-01 13:53:22.291732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.117 13:53:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:12.117 [2024-10-01 13:53:22.302238] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.494 13:53:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.494 "name": "raid_bdev1", 00:19:13.494 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:13.494 "strip_size_kb": 64, 00:19:13.494 "state": "online", 00:19:13.494 "raid_level": "raid5f", 00:19:13.494 "superblock": false, 00:19:13.494 "num_base_bdevs": 4, 00:19:13.494 "num_base_bdevs_discovered": 4, 00:19:13.494 "num_base_bdevs_operational": 4, 00:19:13.494 "process": { 00:19:13.494 "type": "rebuild", 00:19:13.494 "target": "spare", 00:19:13.494 "progress": { 00:19:13.494 "blocks": 17280, 00:19:13.494 "percent": 8 00:19:13.494 } 00:19:13.494 }, 00:19:13.494 "base_bdevs_list": [ 00:19:13.494 { 00:19:13.494 "name": "spare", 00:19:13.494 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev2", 00:19:13.494 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev3", 00:19:13.494 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev4", 00:19:13.494 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 } 00:19:13.494 ] 00:19:13.494 }' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=638 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.494 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.494 "name": "raid_bdev1", 00:19:13.494 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 
00:19:13.494 "strip_size_kb": 64, 00:19:13.494 "state": "online", 00:19:13.494 "raid_level": "raid5f", 00:19:13.494 "superblock": false, 00:19:13.494 "num_base_bdevs": 4, 00:19:13.494 "num_base_bdevs_discovered": 4, 00:19:13.494 "num_base_bdevs_operational": 4, 00:19:13.494 "process": { 00:19:13.494 "type": "rebuild", 00:19:13.494 "target": "spare", 00:19:13.494 "progress": { 00:19:13.494 "blocks": 21120, 00:19:13.494 "percent": 10 00:19:13.494 } 00:19:13.494 }, 00:19:13.494 "base_bdevs_list": [ 00:19:13.494 { 00:19:13.494 "name": "spare", 00:19:13.494 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev2", 00:19:13.494 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 0, 00:19:13.494 "data_size": 65536 00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev3", 00:19:13.495 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:13.495 "is_configured": true, 00:19:13.495 "data_offset": 0, 00:19:13.495 "data_size": 65536 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "name": "BaseBdev4", 00:19:13.495 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:13.495 "is_configured": true, 00:19:13.495 "data_offset": 0, 00:19:13.495 "data_size": 65536 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }' 00:19:13.495 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.495 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.495 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.495 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.495 13:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.434 13:53:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.434 13:53:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.693 "name": "raid_bdev1", 00:19:14.693 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:14.693 "strip_size_kb": 64, 00:19:14.693 "state": "online", 00:19:14.693 "raid_level": "raid5f", 00:19:14.693 "superblock": false, 00:19:14.693 "num_base_bdevs": 4, 00:19:14.693 "num_base_bdevs_discovered": 4, 00:19:14.693 "num_base_bdevs_operational": 4, 00:19:14.693 "process": { 00:19:14.693 "type": "rebuild", 00:19:14.693 "target": "spare", 00:19:14.693 "progress": { 00:19:14.693 "blocks": 42240, 00:19:14.693 "percent": 21 00:19:14.693 } 00:19:14.693 }, 00:19:14.693 "base_bdevs_list": [ 00:19:14.693 { 00:19:14.693 "name": "spare", 00:19:14.693 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 
00:19:14.693 "is_configured": true, 00:19:14.693 "data_offset": 0, 00:19:14.693 "data_size": 65536 00:19:14.693 }, 00:19:14.693 { 00:19:14.693 "name": "BaseBdev2", 00:19:14.693 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:14.693 "is_configured": true, 00:19:14.693 "data_offset": 0, 00:19:14.693 "data_size": 65536 00:19:14.693 }, 00:19:14.693 { 00:19:14.693 "name": "BaseBdev3", 00:19:14.693 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:14.693 "is_configured": true, 00:19:14.693 "data_offset": 0, 00:19:14.693 "data_size": 65536 00:19:14.693 }, 00:19:14.693 { 00:19:14.693 "name": "BaseBdev4", 00:19:14.693 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:14.693 "is_configured": true, 00:19:14.693 "data_offset": 0, 00:19:14.693 "data_size": 65536 00:19:14.693 } 00:19:14.693 ] 00:19:14.693 }' 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.693 13:53:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.631 "name": "raid_bdev1", 00:19:15.631 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:15.631 "strip_size_kb": 64, 00:19:15.631 "state": "online", 00:19:15.631 "raid_level": "raid5f", 00:19:15.631 "superblock": false, 00:19:15.631 "num_base_bdevs": 4, 00:19:15.631 "num_base_bdevs_discovered": 4, 00:19:15.631 "num_base_bdevs_operational": 4, 00:19:15.631 "process": { 00:19:15.631 "type": "rebuild", 00:19:15.631 "target": "spare", 00:19:15.631 "progress": { 00:19:15.631 "blocks": 65280, 00:19:15.631 "percent": 33 00:19:15.631 } 00:19:15.631 }, 00:19:15.631 "base_bdevs_list": [ 00:19:15.631 { 00:19:15.631 "name": "spare", 00:19:15.631 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:15.631 "is_configured": true, 00:19:15.631 "data_offset": 0, 00:19:15.631 "data_size": 65536 00:19:15.631 }, 00:19:15.631 { 00:19:15.631 "name": "BaseBdev2", 00:19:15.631 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:15.631 "is_configured": true, 00:19:15.631 "data_offset": 0, 00:19:15.631 "data_size": 65536 00:19:15.631 }, 00:19:15.631 { 00:19:15.631 "name": "BaseBdev3", 00:19:15.631 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:15.631 "is_configured": true, 00:19:15.631 "data_offset": 0, 00:19:15.631 "data_size": 65536 00:19:15.631 }, 00:19:15.631 { 00:19:15.631 "name": 
"BaseBdev4", 00:19:15.631 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:15.631 "is_configured": true, 00:19:15.631 "data_offset": 0, 00:19:15.631 "data_size": 65536 00:19:15.631 } 00:19:15.631 ] 00:19:15.631 }' 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.631 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.890 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.890 13:53:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.827 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.828 13:53:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.828 "name": "raid_bdev1", 00:19:16.828 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:16.828 "strip_size_kb": 64, 00:19:16.828 "state": "online", 00:19:16.828 "raid_level": "raid5f", 00:19:16.828 "superblock": false, 00:19:16.828 "num_base_bdevs": 4, 00:19:16.828 "num_base_bdevs_discovered": 4, 00:19:16.828 "num_base_bdevs_operational": 4, 00:19:16.828 "process": { 00:19:16.828 "type": "rebuild", 00:19:16.828 "target": "spare", 00:19:16.828 "progress": { 00:19:16.828 "blocks": 86400, 00:19:16.828 "percent": 43 00:19:16.828 } 00:19:16.828 }, 00:19:16.828 "base_bdevs_list": [ 00:19:16.828 { 00:19:16.828 "name": "spare", 00:19:16.828 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:16.828 "is_configured": true, 00:19:16.828 "data_offset": 0, 00:19:16.828 "data_size": 65536 00:19:16.828 }, 00:19:16.828 { 00:19:16.828 "name": "BaseBdev2", 00:19:16.828 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:16.828 "is_configured": true, 00:19:16.828 "data_offset": 0, 00:19:16.828 "data_size": 65536 00:19:16.828 }, 00:19:16.828 { 00:19:16.828 "name": "BaseBdev3", 00:19:16.828 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:16.828 "is_configured": true, 00:19:16.828 "data_offset": 0, 00:19:16.828 "data_size": 65536 00:19:16.828 }, 00:19:16.828 { 00:19:16.828 "name": "BaseBdev4", 00:19:16.828 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:16.828 "is_configured": true, 00:19:16.828 "data_offset": 0, 00:19:16.828 "data_size": 65536 00:19:16.828 } 00:19:16.828 ] 00:19:16.828 }' 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.828 13:53:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.206 13:53:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.206 13:53:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.206 "name": "raid_bdev1", 00:19:18.206 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:18.206 "strip_size_kb": 64, 00:19:18.206 "state": "online", 00:19:18.206 "raid_level": "raid5f", 00:19:18.206 "superblock": false, 00:19:18.206 "num_base_bdevs": 4, 00:19:18.206 "num_base_bdevs_discovered": 4, 00:19:18.206 "num_base_bdevs_operational": 4, 00:19:18.206 "process": { 00:19:18.206 "type": "rebuild", 00:19:18.206 "target": "spare", 00:19:18.206 "progress": { 00:19:18.206 "blocks": 107520, 00:19:18.206 "percent": 54 00:19:18.206 } 
00:19:18.206 }, 00:19:18.206 "base_bdevs_list": [ 00:19:18.206 { 00:19:18.206 "name": "spare", 00:19:18.206 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:18.206 "is_configured": true, 00:19:18.206 "data_offset": 0, 00:19:18.206 "data_size": 65536 00:19:18.206 }, 00:19:18.206 { 00:19:18.206 "name": "BaseBdev2", 00:19:18.206 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:18.206 "is_configured": true, 00:19:18.206 "data_offset": 0, 00:19:18.206 "data_size": 65536 00:19:18.206 }, 00:19:18.206 { 00:19:18.206 "name": "BaseBdev3", 00:19:18.206 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:18.206 "is_configured": true, 00:19:18.206 "data_offset": 0, 00:19:18.206 "data_size": 65536 00:19:18.206 }, 00:19:18.206 { 00:19:18.206 "name": "BaseBdev4", 00:19:18.206 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:18.206 "is_configured": true, 00:19:18.206 "data_offset": 0, 00:19:18.206 "data_size": 65536 00:19:18.206 } 00:19:18.206 ] 00:19:18.206 }' 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.206 13:53:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.142 
13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.142 "name": "raid_bdev1", 00:19:19.142 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:19.142 "strip_size_kb": 64, 00:19:19.142 "state": "online", 00:19:19.142 "raid_level": "raid5f", 00:19:19.142 "superblock": false, 00:19:19.142 "num_base_bdevs": 4, 00:19:19.142 "num_base_bdevs_discovered": 4, 00:19:19.142 "num_base_bdevs_operational": 4, 00:19:19.142 "process": { 00:19:19.142 "type": "rebuild", 00:19:19.142 "target": "spare", 00:19:19.142 "progress": { 00:19:19.142 "blocks": 130560, 00:19:19.142 "percent": 66 00:19:19.142 } 00:19:19.142 }, 00:19:19.142 "base_bdevs_list": [ 00:19:19.142 { 00:19:19.142 "name": "spare", 00:19:19.142 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:19.142 "is_configured": true, 00:19:19.142 "data_offset": 0, 00:19:19.142 "data_size": 65536 00:19:19.142 }, 00:19:19.142 { 00:19:19.142 "name": "BaseBdev2", 00:19:19.142 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:19.142 "is_configured": true, 00:19:19.142 "data_offset": 0, 00:19:19.142 "data_size": 65536 00:19:19.142 }, 00:19:19.142 { 00:19:19.142 "name": "BaseBdev3", 00:19:19.142 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 
00:19:19.142 "is_configured": true, 00:19:19.142 "data_offset": 0, 00:19:19.142 "data_size": 65536 00:19:19.142 }, 00:19:19.142 { 00:19:19.142 "name": "BaseBdev4", 00:19:19.142 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:19.142 "is_configured": true, 00:19:19.142 "data_offset": 0, 00:19:19.142 "data_size": 65536 00:19:19.142 } 00:19:19.142 ] 00:19:19.142 }' 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.142 13:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.549 "name": "raid_bdev1", 00:19:20.549 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:20.549 "strip_size_kb": 64, 00:19:20.549 "state": "online", 00:19:20.549 "raid_level": "raid5f", 00:19:20.549 "superblock": false, 00:19:20.549 "num_base_bdevs": 4, 00:19:20.549 "num_base_bdevs_discovered": 4, 00:19:20.549 "num_base_bdevs_operational": 4, 00:19:20.549 "process": { 00:19:20.549 "type": "rebuild", 00:19:20.549 "target": "spare", 00:19:20.549 "progress": { 00:19:20.549 "blocks": 151680, 00:19:20.549 "percent": 77 00:19:20.549 } 00:19:20.549 }, 00:19:20.549 "base_bdevs_list": [ 00:19:20.549 { 00:19:20.549 "name": "spare", 00:19:20.549 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:20.549 "is_configured": true, 00:19:20.549 "data_offset": 0, 00:19:20.549 "data_size": 65536 00:19:20.549 }, 00:19:20.549 { 00:19:20.549 "name": "BaseBdev2", 00:19:20.549 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:20.549 "is_configured": true, 00:19:20.549 "data_offset": 0, 00:19:20.549 "data_size": 65536 00:19:20.549 }, 00:19:20.549 { 00:19:20.549 "name": "BaseBdev3", 00:19:20.549 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:20.549 "is_configured": true, 00:19:20.549 "data_offset": 0, 00:19:20.549 "data_size": 65536 00:19:20.549 }, 00:19:20.549 { 00:19:20.549 "name": "BaseBdev4", 00:19:20.549 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:20.549 "is_configured": true, 00:19:20.549 "data_offset": 0, 00:19:20.549 "data_size": 65536 00:19:20.549 } 00:19:20.549 ] 00:19:20.549 }' 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.549 13:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.486 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.486 "name": "raid_bdev1", 00:19:21.486 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:21.486 "strip_size_kb": 64, 00:19:21.486 "state": "online", 00:19:21.486 "raid_level": "raid5f", 00:19:21.486 "superblock": false, 00:19:21.486 "num_base_bdevs": 4, 00:19:21.486 "num_base_bdevs_discovered": 4, 00:19:21.486 "num_base_bdevs_operational": 4, 00:19:21.486 
"process": { 00:19:21.486 "type": "rebuild", 00:19:21.486 "target": "spare", 00:19:21.486 "progress": { 00:19:21.486 "blocks": 174720, 00:19:21.486 "percent": 88 00:19:21.486 } 00:19:21.486 }, 00:19:21.486 "base_bdevs_list": [ 00:19:21.487 { 00:19:21.487 "name": "spare", 00:19:21.487 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:21.487 "is_configured": true, 00:19:21.487 "data_offset": 0, 00:19:21.487 "data_size": 65536 00:19:21.487 }, 00:19:21.487 { 00:19:21.487 "name": "BaseBdev2", 00:19:21.487 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:21.487 "is_configured": true, 00:19:21.487 "data_offset": 0, 00:19:21.487 "data_size": 65536 00:19:21.487 }, 00:19:21.487 { 00:19:21.487 "name": "BaseBdev3", 00:19:21.487 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:21.487 "is_configured": true, 00:19:21.487 "data_offset": 0, 00:19:21.487 "data_size": 65536 00:19:21.487 }, 00:19:21.487 { 00:19:21.487 "name": "BaseBdev4", 00:19:21.487 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:21.487 "is_configured": true, 00:19:21.487 "data_offset": 0, 00:19:21.487 "data_size": 65536 00:19:21.487 } 00:19:21.487 ] 00:19:21.487 }' 00:19:21.487 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.487 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.487 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.487 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.487 13:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.424 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.733 "name": "raid_bdev1", 00:19:22.733 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:22.733 "strip_size_kb": 64, 00:19:22.733 "state": "online", 00:19:22.733 "raid_level": "raid5f", 00:19:22.733 "superblock": false, 00:19:22.733 "num_base_bdevs": 4, 00:19:22.733 "num_base_bdevs_discovered": 4, 00:19:22.733 "num_base_bdevs_operational": 4, 00:19:22.733 "process": { 00:19:22.733 "type": "rebuild", 00:19:22.733 "target": "spare", 00:19:22.733 "progress": { 00:19:22.733 "blocks": 195840, 00:19:22.733 "percent": 99 00:19:22.733 } 00:19:22.733 }, 00:19:22.733 "base_bdevs_list": [ 00:19:22.733 { 00:19:22.733 "name": "spare", 00:19:22.733 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 0, 00:19:22.733 "data_size": 65536 00:19:22.733 }, 00:19:22.733 { 00:19:22.733 "name": "BaseBdev2", 00:19:22.733 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:22.733 "is_configured": true, 00:19:22.733 
"data_offset": 0, 00:19:22.733 "data_size": 65536 00:19:22.733 }, 00:19:22.733 { 00:19:22.733 "name": "BaseBdev3", 00:19:22.733 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 0, 00:19:22.733 "data_size": 65536 00:19:22.733 }, 00:19:22.733 { 00:19:22.733 "name": "BaseBdev4", 00:19:22.733 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 0, 00:19:22.733 "data_size": 65536 00:19:22.733 } 00:19:22.733 ] 00:19:22.733 }' 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.733 [2024-10-01 13:53:32.679276] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:22.733 [2024-10-01 13:53:32.679356] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:22.733 [2024-10-01 13:53:32.679428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.733 13:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.671 "name": "raid_bdev1", 00:19:23.671 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:23.671 "strip_size_kb": 64, 00:19:23.671 "state": "online", 00:19:23.671 "raid_level": "raid5f", 00:19:23.671 "superblock": false, 00:19:23.671 "num_base_bdevs": 4, 00:19:23.671 "num_base_bdevs_discovered": 4, 00:19:23.671 "num_base_bdevs_operational": 4, 00:19:23.671 "base_bdevs_list": [ 00:19:23.671 { 00:19:23.671 "name": "spare", 00:19:23.671 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:23.671 "is_configured": true, 00:19:23.671 "data_offset": 0, 00:19:23.671 "data_size": 65536 00:19:23.671 }, 00:19:23.671 { 00:19:23.671 "name": "BaseBdev2", 00:19:23.671 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:23.671 "is_configured": true, 00:19:23.671 "data_offset": 0, 00:19:23.671 "data_size": 65536 00:19:23.671 }, 00:19:23.671 { 00:19:23.671 "name": "BaseBdev3", 00:19:23.671 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:23.671 "is_configured": true, 00:19:23.671 "data_offset": 0, 00:19:23.671 "data_size": 65536 00:19:23.671 }, 00:19:23.671 { 00:19:23.671 "name": "BaseBdev4", 00:19:23.671 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:23.671 "is_configured": 
true, 00:19:23.671 "data_offset": 0, 00:19:23.671 "data_size": 65536 00:19:23.671 } 00:19:23.671 ] 00:19:23.671 }' 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:23.671 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.931 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.931 "name": "raid_bdev1", 00:19:23.931 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:23.931 "strip_size_kb": 64, 00:19:23.931 "state": 
"online", 00:19:23.931 "raid_level": "raid5f", 00:19:23.931 "superblock": false, 00:19:23.931 "num_base_bdevs": 4, 00:19:23.931 "num_base_bdevs_discovered": 4, 00:19:23.931 "num_base_bdevs_operational": 4, 00:19:23.931 "base_bdevs_list": [ 00:19:23.931 { 00:19:23.931 "name": "spare", 00:19:23.931 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:23.931 "is_configured": true, 00:19:23.931 "data_offset": 0, 00:19:23.931 "data_size": 65536 00:19:23.931 }, 00:19:23.931 { 00:19:23.931 "name": "BaseBdev2", 00:19:23.931 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:23.931 "is_configured": true, 00:19:23.932 "data_offset": 0, 00:19:23.932 "data_size": 65536 00:19:23.932 }, 00:19:23.932 { 00:19:23.932 "name": "BaseBdev3", 00:19:23.932 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:23.932 "is_configured": true, 00:19:23.932 "data_offset": 0, 00:19:23.932 "data_size": 65536 00:19:23.932 }, 00:19:23.932 { 00:19:23.932 "name": "BaseBdev4", 00:19:23.932 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:23.932 "is_configured": true, 00:19:23.932 "data_offset": 0, 00:19:23.932 "data_size": 65536 00:19:23.932 } 00:19:23.932 ] 00:19:23.932 }' 00:19:23.932 13:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.932 13:53:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.932 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.191 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.191 "name": "raid_bdev1", 00:19:24.191 "uuid": "d10d4b5a-891a-48db-a1b2-8dc1b33e47d2", 00:19:24.191 "strip_size_kb": 64, 00:19:24.191 "state": "online", 00:19:24.191 "raid_level": "raid5f", 00:19:24.191 "superblock": false, 00:19:24.191 "num_base_bdevs": 4, 00:19:24.191 "num_base_bdevs_discovered": 4, 00:19:24.191 "num_base_bdevs_operational": 4, 00:19:24.191 "base_bdevs_list": [ 00:19:24.191 { 00:19:24.191 "name": "spare", 00:19:24.191 "uuid": "802bba6a-0574-58fc-aea6-f139adfea502", 00:19:24.191 "is_configured": true, 00:19:24.191 "data_offset": 0, 00:19:24.191 "data_size": 65536 00:19:24.191 }, 00:19:24.191 { 00:19:24.191 
"name": "BaseBdev2", 00:19:24.191 "uuid": "0e41a61b-1b5c-5c36-ae92-ea766a1e0faa", 00:19:24.191 "is_configured": true, 00:19:24.191 "data_offset": 0, 00:19:24.191 "data_size": 65536 00:19:24.191 }, 00:19:24.191 { 00:19:24.191 "name": "BaseBdev3", 00:19:24.191 "uuid": "c09bdb46-640f-518d-95fc-ec95b09bcf08", 00:19:24.191 "is_configured": true, 00:19:24.191 "data_offset": 0, 00:19:24.191 "data_size": 65536 00:19:24.191 }, 00:19:24.191 { 00:19:24.191 "name": "BaseBdev4", 00:19:24.191 "uuid": "071d6a8b-d6b0-5319-9897-e044bba9a508", 00:19:24.191 "is_configured": true, 00:19:24.191 "data_offset": 0, 00:19:24.191 "data_size": 65536 00:19:24.191 } 00:19:24.191 ] 00:19:24.191 }' 00:19:24.191 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.191 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.450 [2024-10-01 13:53:34.527647] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.450 [2024-10-01 13:53:34.527688] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.450 [2024-10-01 13:53:34.527790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.450 [2024-10-01 13:53:34.527894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.450 [2024-10-01 13:53:34.527909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.450 13:53:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:24.450 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:24.709 /dev/nbd0 00:19:24.709 13:53:34 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.709 1+0 records in 00:19:24.709 1+0 records out 00:19:24.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399996 s, 10.2 MB/s 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.709 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:24.710 13:53:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:24.968 /dev/nbd1 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.968 1+0 records in 00:19:24.968 1+0 records out 00:19:24.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507689 s, 8.1 MB/s 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:24.968 13:53:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.227 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.486 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84652 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84652 ']' 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84652 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84652 00:19:25.745 killing process with pid 84652 00:19:25.745 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.745 00:19:25.745 Latency(us) 00:19:25.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.745 =================================================================================================================== 00:19:25.745 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84652' 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84652 00:19:25.745 [2024-10-01 13:53:35.904440] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.745 13:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84652 00:19:26.312 [2024-10-01 13:53:36.419541] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.688 00:19:27.688 real 0m20.749s 00:19:27.688 user 0m24.650s 00:19:27.688 sys 0m2.746s 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.688 ************************************ 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.688 END TEST raid5f_rebuild_test 00:19:27.688 ************************************ 00:19:27.688 13:53:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:27.688 13:53:37 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:27.688 13:53:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.688 13:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.688 ************************************ 00:19:27.688 START TEST raid5f_rebuild_test_sb 00:19:27.688 ************************************ 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:27.688 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:28.023 13:53:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:28.023 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85175 
00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85175 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85175 ']' 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.024 13:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.024 [2024-10-01 13:53:37.991852] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:19:28.024 [2024-10-01 13:53:37.991979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85175 ] 00:19:28.024 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:28.024 Zero copy mechanism will not be used. 
00:19:28.024 [2024-10-01 13:53:38.162211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.283 [2024-10-01 13:53:38.397787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.542 [2024-10-01 13:53:38.614041] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.542 [2024-10-01 13:53:38.614083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 BaseBdev1_malloc 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 [2024-10-01 13:53:38.872062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:28.801 [2024-10-01 13:53:38.872149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.801 [2024-10-01 13:53:38.872175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:28.801 
[2024-10-01 13:53:38.872194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.801 [2024-10-01 13:53:38.874701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.801 [2024-10-01 13:53:38.874745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.801 BaseBdev1 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 BaseBdev2_malloc 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 [2024-10-01 13:53:38.943754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:28.801 [2024-10-01 13:53:38.943823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.801 [2024-10-01 13:53:38.943844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:28.801 [2024-10-01 13:53:38.943858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.801 [2024-10-01 13:53:38.946229] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.801 [2024-10-01 13:53:38.946274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:28.801 BaseBdev2 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 BaseBdev3_malloc 00:19:29.059 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:29.059 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 [2024-10-01 13:53:39.000863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:29.059 [2024-10-01 13:53:39.001061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.059 [2024-10-01 13:53:39.001094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:29.059 [2024-10-01 13:53:39.001109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.059 [2024-10-01 13:53:39.003593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.059 [2024-10-01 13:53:39.003641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:19:29.059 BaseBdev3 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 BaseBdev4_malloc 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 [2024-10-01 13:53:39.063274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:29.059 [2024-10-01 13:53:39.063487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.059 [2024-10-01 13:53:39.063572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:29.059 [2024-10-01 13:53:39.063676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.059 [2024-10-01 13:53:39.066581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.059 BaseBdev4 00:19:29.059 [2024-10-01 13:53:39.066690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 spare_malloc 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 spare_delay 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 [2024-10-01 13:53:39.137778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:29.059 [2024-10-01 13:53:39.137853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.059 [2024-10-01 13:53:39.137884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:29.059 [2024-10-01 13:53:39.137902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.059 [2024-10-01 13:53:39.140760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.059 [2024-10-01 13:53:39.140939] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:19:29.059 spare 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 [2024-10-01 13:53:39.149930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.059 [2024-10-01 13:53:39.152401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.059 [2024-10-01 13:53:39.152637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:29.059 [2024-10-01 13:53:39.152732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:29.059 [2024-10-01 13:53:39.152958] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:29.059 [2024-10-01 13:53:39.152977] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.059 [2024-10-01 13:53:39.153300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:29.059 [2024-10-01 13:53:39.161940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:29.059 [2024-10-01 13:53:39.161965] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:29.059 [2024-10-01 13:53:39.162214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.059 "name": "raid_bdev1", 00:19:29.059 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:29.059 "strip_size_kb": 64, 00:19:29.059 "state": "online", 00:19:29.059 "raid_level": "raid5f", 00:19:29.059 "superblock": true, 
00:19:29.059 "num_base_bdevs": 4, 00:19:29.059 "num_base_bdevs_discovered": 4, 00:19:29.059 "num_base_bdevs_operational": 4, 00:19:29.059 "base_bdevs_list": [ 00:19:29.059 { 00:19:29.059 "name": "BaseBdev1", 00:19:29.059 "uuid": "37fdbdf8-90bf-5c26-a808-b5e955e2d2f1", 00:19:29.059 "is_configured": true, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 }, 00:19:29.059 { 00:19:29.059 "name": "BaseBdev2", 00:19:29.059 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:29.059 "is_configured": true, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 }, 00:19:29.059 { 00:19:29.059 "name": "BaseBdev3", 00:19:29.059 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:29.059 "is_configured": true, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 }, 00:19:29.059 { 00:19:29.059 "name": "BaseBdev4", 00:19:29.059 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:29.059 "is_configured": true, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 } 00:19:29.059 ] 00:19:29.059 }' 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.059 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.628 [2024-10-01 13:53:39.582016] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.628 13:53:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.628 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:29.887 [2024-10-01 13:53:39.869484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:29.887 /dev/nbd0 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.887 1+0 records in 00:19:29.887 1+0 records out 00:19:29.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258958 s, 15.8 MB/s 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:29.887 13:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:30.454 496+0 records in 00:19:30.454 496+0 records out 00:19:30.454 97517568 bytes (98 MB, 93 MiB) copied, 0.563083 s, 173 MB/s 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.454 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:30.713 [2024-10-01 13:53:40.717938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.713 [2024-10-01 13:53:40.768111] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.713 "name": "raid_bdev1", 00:19:30.713 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:30.713 "strip_size_kb": 64, 00:19:30.713 "state": "online", 00:19:30.713 "raid_level": "raid5f", 00:19:30.713 "superblock": true, 00:19:30.713 "num_base_bdevs": 4, 00:19:30.713 "num_base_bdevs_discovered": 3, 00:19:30.713 "num_base_bdevs_operational": 3, 00:19:30.713 "base_bdevs_list": [ 00:19:30.713 { 00:19:30.713 "name": null, 00:19:30.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.713 "is_configured": false, 00:19:30.713 "data_offset": 0, 00:19:30.713 "data_size": 63488 00:19:30.713 }, 00:19:30.713 { 00:19:30.713 "name": "BaseBdev2", 00:19:30.713 "uuid": 
"d62c8230-c200-5b34-9341-b635da8b6932", 00:19:30.713 "is_configured": true, 00:19:30.713 "data_offset": 2048, 00:19:30.713 "data_size": 63488 00:19:30.713 }, 00:19:30.713 { 00:19:30.713 "name": "BaseBdev3", 00:19:30.713 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:30.713 "is_configured": true, 00:19:30.713 "data_offset": 2048, 00:19:30.713 "data_size": 63488 00:19:30.713 }, 00:19:30.713 { 00:19:30.713 "name": "BaseBdev4", 00:19:30.713 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:30.713 "is_configured": true, 00:19:30.713 "data_offset": 2048, 00:19:30.713 "data_size": 63488 00:19:30.713 } 00:19:30.713 ] 00:19:30.713 }' 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.713 13:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.297 13:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.297 13:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.297 13:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.297 [2024-10-01 13:53:41.179634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.297 [2024-10-01 13:53:41.196884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:31.297 13:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.297 13:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:31.297 [2024-10-01 13:53:41.208458] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.232 "name": "raid_bdev1", 00:19:32.232 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:32.232 "strip_size_kb": 64, 00:19:32.232 "state": "online", 00:19:32.232 "raid_level": "raid5f", 00:19:32.232 "superblock": true, 00:19:32.232 "num_base_bdevs": 4, 00:19:32.232 "num_base_bdevs_discovered": 4, 00:19:32.232 "num_base_bdevs_operational": 4, 00:19:32.232 "process": { 00:19:32.232 "type": "rebuild", 00:19:32.232 "target": "spare", 00:19:32.232 "progress": { 00:19:32.232 "blocks": 17280, 00:19:32.232 "percent": 9 00:19:32.232 } 00:19:32.232 }, 00:19:32.232 "base_bdevs_list": [ 00:19:32.232 { 00:19:32.232 "name": "spare", 00:19:32.232 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:32.232 "is_configured": true, 00:19:32.232 "data_offset": 2048, 00:19:32.232 "data_size": 63488 00:19:32.232 }, 00:19:32.232 { 00:19:32.232 "name": "BaseBdev2", 00:19:32.232 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:32.232 "is_configured": true, 00:19:32.232 
"data_offset": 2048, 00:19:32.232 "data_size": 63488 00:19:32.232 }, 00:19:32.232 { 00:19:32.232 "name": "BaseBdev3", 00:19:32.232 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:32.232 "is_configured": true, 00:19:32.232 "data_offset": 2048, 00:19:32.232 "data_size": 63488 00:19:32.232 }, 00:19:32.232 { 00:19:32.232 "name": "BaseBdev4", 00:19:32.232 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:32.232 "is_configured": true, 00:19:32.232 "data_offset": 2048, 00:19:32.232 "data_size": 63488 00:19:32.232 } 00:19:32.232 ] 00:19:32.232 }' 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.232 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.233 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.233 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.233 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:32.233 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.233 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.233 [2024-10-01 13:53:42.351755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.233 [2024-10-01 13:53:42.417826] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:32.233 [2024-10-01 13:53:42.418152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.233 [2024-10-01 13:53:42.418179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.233 [2024-10-01 13:53:42.418198] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:32.490 
13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.490 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.490 "name": "raid_bdev1", 00:19:32.490 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:32.490 
"strip_size_kb": 64, 00:19:32.490 "state": "online", 00:19:32.490 "raid_level": "raid5f", 00:19:32.490 "superblock": true, 00:19:32.490 "num_base_bdevs": 4, 00:19:32.490 "num_base_bdevs_discovered": 3, 00:19:32.490 "num_base_bdevs_operational": 3, 00:19:32.490 "base_bdevs_list": [ 00:19:32.490 { 00:19:32.490 "name": null, 00:19:32.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.491 "is_configured": false, 00:19:32.491 "data_offset": 0, 00:19:32.491 "data_size": 63488 00:19:32.491 }, 00:19:32.491 { 00:19:32.491 "name": "BaseBdev2", 00:19:32.491 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:32.491 "is_configured": true, 00:19:32.491 "data_offset": 2048, 00:19:32.491 "data_size": 63488 00:19:32.491 }, 00:19:32.491 { 00:19:32.491 "name": "BaseBdev3", 00:19:32.491 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:32.491 "is_configured": true, 00:19:32.491 "data_offset": 2048, 00:19:32.491 "data_size": 63488 00:19:32.491 }, 00:19:32.491 { 00:19:32.491 "name": "BaseBdev4", 00:19:32.491 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:32.491 "is_configured": true, 00:19:32.491 "data_offset": 2048, 00:19:32.491 "data_size": 63488 00:19:32.491 } 00:19:32.491 ] 00:19:32.491 }' 00:19:32.491 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.491 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.748 
13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.748 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.006 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.006 13:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.006 "name": "raid_bdev1", 00:19:33.006 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:33.006 "strip_size_kb": 64, 00:19:33.006 "state": "online", 00:19:33.006 "raid_level": "raid5f", 00:19:33.006 "superblock": true, 00:19:33.006 "num_base_bdevs": 4, 00:19:33.006 "num_base_bdevs_discovered": 3, 00:19:33.006 "num_base_bdevs_operational": 3, 00:19:33.006 "base_bdevs_list": [ 00:19:33.006 { 00:19:33.006 "name": null, 00:19:33.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.006 "is_configured": false, 00:19:33.006 "data_offset": 0, 00:19:33.006 "data_size": 63488 00:19:33.006 }, 00:19:33.006 { 00:19:33.006 "name": "BaseBdev2", 00:19:33.006 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:33.006 "is_configured": true, 00:19:33.006 "data_offset": 2048, 00:19:33.006 "data_size": 63488 00:19:33.006 }, 00:19:33.006 { 00:19:33.006 "name": "BaseBdev3", 00:19:33.007 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:33.007 "is_configured": true, 00:19:33.007 "data_offset": 2048, 00:19:33.007 "data_size": 63488 00:19:33.007 }, 00:19:33.007 { 00:19:33.007 "name": "BaseBdev4", 00:19:33.007 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:33.007 "is_configured": true, 00:19:33.007 "data_offset": 2048, 00:19:33.007 "data_size": 63488 00:19:33.007 } 00:19:33.007 ] 00:19:33.007 }' 00:19:33.007 13:53:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.007 [2024-10-01 13:53:43.070184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.007 [2024-10-01 13:53:43.087062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.007 13:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:33.007 [2024-10-01 13:53:43.098331] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.942 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.201 "name": "raid_bdev1", 00:19:34.201 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:34.201 "strip_size_kb": 64, 00:19:34.201 "state": "online", 00:19:34.201 "raid_level": "raid5f", 00:19:34.201 "superblock": true, 00:19:34.201 "num_base_bdevs": 4, 00:19:34.201 "num_base_bdevs_discovered": 4, 00:19:34.201 "num_base_bdevs_operational": 4, 00:19:34.201 "process": { 00:19:34.201 "type": "rebuild", 00:19:34.201 "target": "spare", 00:19:34.201 "progress": { 00:19:34.201 "blocks": 17280, 00:19:34.201 "percent": 9 00:19:34.201 } 00:19:34.201 }, 00:19:34.201 "base_bdevs_list": [ 00:19:34.201 { 00:19:34.201 "name": "spare", 00:19:34.201 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:34.201 "is_configured": true, 00:19:34.201 "data_offset": 2048, 00:19:34.201 "data_size": 63488 00:19:34.201 }, 00:19:34.201 { 00:19:34.201 "name": "BaseBdev2", 00:19:34.202 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 }, 00:19:34.202 { 00:19:34.202 "name": "BaseBdev3", 00:19:34.202 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 }, 00:19:34.202 { 00:19:34.202 "name": "BaseBdev4", 00:19:34.202 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 
00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 } 00:19:34.202 ] 00:19:34.202 }' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:34.202 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=659 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.202 "name": "raid_bdev1", 00:19:34.202 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:34.202 "strip_size_kb": 64, 00:19:34.202 "state": "online", 00:19:34.202 "raid_level": "raid5f", 00:19:34.202 "superblock": true, 00:19:34.202 "num_base_bdevs": 4, 00:19:34.202 "num_base_bdevs_discovered": 4, 00:19:34.202 "num_base_bdevs_operational": 4, 00:19:34.202 "process": { 00:19:34.202 "type": "rebuild", 00:19:34.202 "target": "spare", 00:19:34.202 "progress": { 00:19:34.202 "blocks": 21120, 00:19:34.202 "percent": 11 00:19:34.202 } 00:19:34.202 }, 00:19:34.202 "base_bdevs_list": [ 00:19:34.202 { 00:19:34.202 "name": "spare", 00:19:34.202 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 }, 00:19:34.202 { 00:19:34.202 "name": "BaseBdev2", 00:19:34.202 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 }, 00:19:34.202 { 00:19:34.202 "name": "BaseBdev3", 00:19:34.202 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 }, 00:19:34.202 { 00:19:34.202 "name": "BaseBdev4", 00:19:34.202 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 
00:19:34.202 "is_configured": true, 00:19:34.202 "data_offset": 2048, 00:19:34.202 "data_size": 63488 00:19:34.202 } 00:19:34.202 ] 00:19:34.202 }' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.202 13:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.580 13:53:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.580 "name": "raid_bdev1", 00:19:35.580 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:35.580 "strip_size_kb": 64, 00:19:35.580 "state": "online", 00:19:35.580 "raid_level": "raid5f", 00:19:35.580 "superblock": true, 00:19:35.580 "num_base_bdevs": 4, 00:19:35.580 "num_base_bdevs_discovered": 4, 00:19:35.580 "num_base_bdevs_operational": 4, 00:19:35.580 "process": { 00:19:35.580 "type": "rebuild", 00:19:35.580 "target": "spare", 00:19:35.580 "progress": { 00:19:35.580 "blocks": 42240, 00:19:35.580 "percent": 22 00:19:35.580 } 00:19:35.580 }, 00:19:35.580 "base_bdevs_list": [ 00:19:35.580 { 00:19:35.580 "name": "spare", 00:19:35.580 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:35.580 "is_configured": true, 00:19:35.580 "data_offset": 2048, 00:19:35.580 "data_size": 63488 00:19:35.580 }, 00:19:35.580 { 00:19:35.580 "name": "BaseBdev2", 00:19:35.580 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:35.580 "is_configured": true, 00:19:35.580 "data_offset": 2048, 00:19:35.580 "data_size": 63488 00:19:35.580 }, 00:19:35.580 { 00:19:35.580 "name": "BaseBdev3", 00:19:35.580 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:35.580 "is_configured": true, 00:19:35.580 "data_offset": 2048, 00:19:35.580 "data_size": 63488 00:19:35.580 }, 00:19:35.580 { 00:19:35.580 "name": "BaseBdev4", 00:19:35.580 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:35.580 "is_configured": true, 00:19:35.580 "data_offset": 2048, 00:19:35.580 "data_size": 63488 00:19:35.580 } 00:19:35.580 ] 00:19:35.580 }' 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.580 13:53:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.580 13:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.515 "name": "raid_bdev1", 00:19:36.515 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:36.515 "strip_size_kb": 64, 00:19:36.515 "state": "online", 00:19:36.515 "raid_level": "raid5f", 00:19:36.515 "superblock": true, 00:19:36.515 "num_base_bdevs": 4, 00:19:36.515 "num_base_bdevs_discovered": 4, 00:19:36.515 "num_base_bdevs_operational": 4, 00:19:36.515 "process": { 00:19:36.515 "type": "rebuild", 00:19:36.515 "target": "spare", 00:19:36.515 "progress": 
{ 00:19:36.515 "blocks": 65280, 00:19:36.515 "percent": 34 00:19:36.515 } 00:19:36.515 }, 00:19:36.515 "base_bdevs_list": [ 00:19:36.515 { 00:19:36.515 "name": "spare", 00:19:36.515 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:36.515 "is_configured": true, 00:19:36.515 "data_offset": 2048, 00:19:36.515 "data_size": 63488 00:19:36.515 }, 00:19:36.515 { 00:19:36.515 "name": "BaseBdev2", 00:19:36.515 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:36.515 "is_configured": true, 00:19:36.515 "data_offset": 2048, 00:19:36.515 "data_size": 63488 00:19:36.515 }, 00:19:36.515 { 00:19:36.515 "name": "BaseBdev3", 00:19:36.515 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:36.515 "is_configured": true, 00:19:36.515 "data_offset": 2048, 00:19:36.515 "data_size": 63488 00:19:36.515 }, 00:19:36.515 { 00:19:36.515 "name": "BaseBdev4", 00:19:36.515 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:36.515 "is_configured": true, 00:19:36.515 "data_offset": 2048, 00:19:36.515 "data_size": 63488 00:19:36.515 } 00:19:36.515 ] 00:19:36.515 }' 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.515 13:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.892 
13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.892 "name": "raid_bdev1", 00:19:37.892 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:37.892 "strip_size_kb": 64, 00:19:37.892 "state": "online", 00:19:37.892 "raid_level": "raid5f", 00:19:37.892 "superblock": true, 00:19:37.892 "num_base_bdevs": 4, 00:19:37.892 "num_base_bdevs_discovered": 4, 00:19:37.892 "num_base_bdevs_operational": 4, 00:19:37.892 "process": { 00:19:37.892 "type": "rebuild", 00:19:37.892 "target": "spare", 00:19:37.892 "progress": { 00:19:37.892 "blocks": 86400, 00:19:37.892 "percent": 45 00:19:37.892 } 00:19:37.892 }, 00:19:37.892 "base_bdevs_list": [ 00:19:37.892 { 00:19:37.892 "name": "spare", 00:19:37.892 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:37.892 "is_configured": true, 00:19:37.892 "data_offset": 2048, 00:19:37.892 "data_size": 63488 00:19:37.892 }, 00:19:37.892 { 00:19:37.892 "name": "BaseBdev2", 00:19:37.892 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:37.892 "is_configured": true, 00:19:37.892 "data_offset": 2048, 00:19:37.892 "data_size": 
63488 00:19:37.892 }, 00:19:37.892 { 00:19:37.892 "name": "BaseBdev3", 00:19:37.892 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:37.892 "is_configured": true, 00:19:37.892 "data_offset": 2048, 00:19:37.892 "data_size": 63488 00:19:37.892 }, 00:19:37.892 { 00:19:37.892 "name": "BaseBdev4", 00:19:37.892 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:37.892 "is_configured": true, 00:19:37.892 "data_offset": 2048, 00:19:37.892 "data_size": 63488 00:19:37.892 } 00:19:37.892 ] 00:19:37.892 }' 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.892 13:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.826 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.826 "name": "raid_bdev1", 00:19:38.826 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:38.826 "strip_size_kb": 64, 00:19:38.826 "state": "online", 00:19:38.826 "raid_level": "raid5f", 00:19:38.826 "superblock": true, 00:19:38.826 "num_base_bdevs": 4, 00:19:38.826 "num_base_bdevs_discovered": 4, 00:19:38.826 "num_base_bdevs_operational": 4, 00:19:38.826 "process": { 00:19:38.826 "type": "rebuild", 00:19:38.827 "target": "spare", 00:19:38.827 "progress": { 00:19:38.827 "blocks": 109440, 00:19:38.827 "percent": 57 00:19:38.827 } 00:19:38.827 }, 00:19:38.827 "base_bdevs_list": [ 00:19:38.827 { 00:19:38.827 "name": "spare", 00:19:38.827 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:38.827 "is_configured": true, 00:19:38.827 "data_offset": 2048, 00:19:38.827 "data_size": 63488 00:19:38.827 }, 00:19:38.827 { 00:19:38.827 "name": "BaseBdev2", 00:19:38.827 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:38.827 "is_configured": true, 00:19:38.827 "data_offset": 2048, 00:19:38.827 "data_size": 63488 00:19:38.827 }, 00:19:38.827 { 00:19:38.827 "name": "BaseBdev3", 00:19:38.827 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:38.827 "is_configured": true, 00:19:38.827 "data_offset": 2048, 00:19:38.827 "data_size": 63488 00:19:38.827 }, 00:19:38.827 { 00:19:38.827 "name": "BaseBdev4", 00:19:38.827 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:38.827 "is_configured": true, 00:19:38.827 "data_offset": 2048, 00:19:38.827 "data_size": 63488 00:19:38.827 } 00:19:38.827 ] 00:19:38.827 }' 00:19:38.827 13:53:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.827 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.827 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.827 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.827 13:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.237 13:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.237 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.237 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.237 "name": "raid_bdev1", 00:19:40.237 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:40.237 
"strip_size_kb": 64, 00:19:40.237 "state": "online", 00:19:40.237 "raid_level": "raid5f", 00:19:40.237 "superblock": true, 00:19:40.237 "num_base_bdevs": 4, 00:19:40.237 "num_base_bdevs_discovered": 4, 00:19:40.237 "num_base_bdevs_operational": 4, 00:19:40.237 "process": { 00:19:40.237 "type": "rebuild", 00:19:40.237 "target": "spare", 00:19:40.237 "progress": { 00:19:40.238 "blocks": 130560, 00:19:40.238 "percent": 68 00:19:40.238 } 00:19:40.238 }, 00:19:40.238 "base_bdevs_list": [ 00:19:40.238 { 00:19:40.238 "name": "spare", 00:19:40.238 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:40.238 "is_configured": true, 00:19:40.238 "data_offset": 2048, 00:19:40.238 "data_size": 63488 00:19:40.238 }, 00:19:40.238 { 00:19:40.238 "name": "BaseBdev2", 00:19:40.238 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:40.238 "is_configured": true, 00:19:40.238 "data_offset": 2048, 00:19:40.238 "data_size": 63488 00:19:40.238 }, 00:19:40.238 { 00:19:40.238 "name": "BaseBdev3", 00:19:40.238 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:40.238 "is_configured": true, 00:19:40.238 "data_offset": 2048, 00:19:40.238 "data_size": 63488 00:19:40.238 }, 00:19:40.238 { 00:19:40.238 "name": "BaseBdev4", 00:19:40.238 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:40.238 "is_configured": true, 00:19:40.238 "data_offset": 2048, 00:19:40.238 "data_size": 63488 00:19:40.238 } 00:19:40.238 ] 00:19:40.238 }' 00:19:40.238 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.238 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.238 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.238 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.238 13:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.173 
13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.173 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.173 "name": "raid_bdev1", 00:19:41.173 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:41.174 "strip_size_kb": 64, 00:19:41.174 "state": "online", 00:19:41.174 "raid_level": "raid5f", 00:19:41.174 "superblock": true, 00:19:41.174 "num_base_bdevs": 4, 00:19:41.174 "num_base_bdevs_discovered": 4, 00:19:41.174 "num_base_bdevs_operational": 4, 00:19:41.174 "process": { 00:19:41.174 "type": "rebuild", 00:19:41.174 "target": "spare", 00:19:41.174 "progress": { 00:19:41.174 "blocks": 151680, 00:19:41.174 "percent": 79 00:19:41.174 } 00:19:41.174 }, 00:19:41.174 "base_bdevs_list": [ 00:19:41.174 { 00:19:41.174 "name": "spare", 00:19:41.174 "uuid": 
"8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:41.174 "is_configured": true, 00:19:41.174 "data_offset": 2048, 00:19:41.174 "data_size": 63488 00:19:41.174 }, 00:19:41.174 { 00:19:41.174 "name": "BaseBdev2", 00:19:41.174 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:41.174 "is_configured": true, 00:19:41.174 "data_offset": 2048, 00:19:41.174 "data_size": 63488 00:19:41.174 }, 00:19:41.174 { 00:19:41.174 "name": "BaseBdev3", 00:19:41.174 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:41.174 "is_configured": true, 00:19:41.174 "data_offset": 2048, 00:19:41.174 "data_size": 63488 00:19:41.174 }, 00:19:41.174 { 00:19:41.174 "name": "BaseBdev4", 00:19:41.174 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:41.174 "is_configured": true, 00:19:41.174 "data_offset": 2048, 00:19:41.174 "data_size": 63488 00:19:41.174 } 00:19:41.174 ] 00:19:41.174 }' 00:19:41.174 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.174 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.174 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.174 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.174 13:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.110 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.369 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.369 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.369 "name": "raid_bdev1", 00:19:42.369 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:42.369 "strip_size_kb": 64, 00:19:42.369 "state": "online", 00:19:42.369 "raid_level": "raid5f", 00:19:42.369 "superblock": true, 00:19:42.369 "num_base_bdevs": 4, 00:19:42.369 "num_base_bdevs_discovered": 4, 00:19:42.369 "num_base_bdevs_operational": 4, 00:19:42.369 "process": { 00:19:42.369 "type": "rebuild", 00:19:42.369 "target": "spare", 00:19:42.369 "progress": { 00:19:42.369 "blocks": 174720, 00:19:42.369 "percent": 91 00:19:42.369 } 00:19:42.369 }, 00:19:42.369 "base_bdevs_list": [ 00:19:42.369 { 00:19:42.369 "name": "spare", 00:19:42.369 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:42.369 "is_configured": true, 00:19:42.369 "data_offset": 2048, 00:19:42.369 "data_size": 63488 00:19:42.369 }, 00:19:42.369 { 00:19:42.369 "name": "BaseBdev2", 00:19:42.369 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:42.369 "is_configured": true, 00:19:42.369 "data_offset": 2048, 00:19:42.370 "data_size": 63488 00:19:42.370 }, 00:19:42.370 { 00:19:42.370 "name": "BaseBdev3", 00:19:42.370 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:42.370 "is_configured": true, 00:19:42.370 
"data_offset": 2048, 00:19:42.370 "data_size": 63488 00:19:42.370 }, 00:19:42.370 { 00:19:42.370 "name": "BaseBdev4", 00:19:42.370 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:42.370 "is_configured": true, 00:19:42.370 "data_offset": 2048, 00:19:42.370 "data_size": 63488 00:19:42.370 } 00:19:42.370 ] 00:19:42.370 }' 00:19:42.370 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.370 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.370 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.370 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.370 13:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.307 [2024-10-01 13:53:53.174405] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:43.307 [2024-10-01 13:53:53.174499] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:43.307 [2024-10-01 13:53:53.174661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.307 13:53:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.307 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.307 "name": "raid_bdev1", 00:19:43.307 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:43.307 "strip_size_kb": 64, 00:19:43.307 "state": "online", 00:19:43.307 "raid_level": "raid5f", 00:19:43.307 "superblock": true, 00:19:43.307 "num_base_bdevs": 4, 00:19:43.307 "num_base_bdevs_discovered": 4, 00:19:43.307 "num_base_bdevs_operational": 4, 00:19:43.307 "base_bdevs_list": [ 00:19:43.307 { 00:19:43.307 "name": "spare", 00:19:43.307 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:43.307 "is_configured": true, 00:19:43.307 "data_offset": 2048, 00:19:43.307 "data_size": 63488 00:19:43.307 }, 00:19:43.307 { 00:19:43.307 "name": "BaseBdev2", 00:19:43.307 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:43.307 "is_configured": true, 00:19:43.307 "data_offset": 2048, 00:19:43.307 "data_size": 63488 00:19:43.307 }, 00:19:43.307 { 00:19:43.307 "name": "BaseBdev3", 00:19:43.307 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:43.307 "is_configured": true, 00:19:43.307 "data_offset": 2048, 00:19:43.307 "data_size": 63488 00:19:43.307 }, 00:19:43.307 { 00:19:43.307 "name": "BaseBdev4", 00:19:43.307 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:43.307 "is_configured": true, 00:19:43.308 "data_offset": 2048, 00:19:43.308 "data_size": 63488 00:19:43.308 } 00:19:43.308 ] 00:19:43.308 }' 00:19:43.308 13:53:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.566 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.567 "name": "raid_bdev1", 00:19:43.567 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:43.567 "strip_size_kb": 64, 00:19:43.567 "state": "online", 00:19:43.567 "raid_level": "raid5f", 00:19:43.567 "superblock": true, 
00:19:43.567 "num_base_bdevs": 4, 00:19:43.567 "num_base_bdevs_discovered": 4, 00:19:43.567 "num_base_bdevs_operational": 4, 00:19:43.567 "base_bdevs_list": [ 00:19:43.567 { 00:19:43.567 "name": "spare", 00:19:43.567 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:43.567 "is_configured": true, 00:19:43.567 "data_offset": 2048, 00:19:43.567 "data_size": 63488 00:19:43.567 }, 00:19:43.567 { 00:19:43.567 "name": "BaseBdev2", 00:19:43.567 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:43.567 "is_configured": true, 00:19:43.567 "data_offset": 2048, 00:19:43.567 "data_size": 63488 00:19:43.567 }, 00:19:43.567 { 00:19:43.567 "name": "BaseBdev3", 00:19:43.567 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:43.567 "is_configured": true, 00:19:43.567 "data_offset": 2048, 00:19:43.567 "data_size": 63488 00:19:43.567 }, 00:19:43.567 { 00:19:43.567 "name": "BaseBdev4", 00:19:43.567 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:43.567 "is_configured": true, 00:19:43.567 "data_offset": 2048, 00:19:43.567 "data_size": 63488 00:19:43.567 } 00:19:43.567 ] 00:19:43.567 }' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.567 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.825 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.825 "name": "raid_bdev1", 00:19:43.825 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:43.825 "strip_size_kb": 64, 00:19:43.825 "state": "online", 00:19:43.825 "raid_level": "raid5f", 00:19:43.825 "superblock": true, 00:19:43.825 "num_base_bdevs": 4, 00:19:43.825 "num_base_bdevs_discovered": 4, 00:19:43.825 "num_base_bdevs_operational": 4, 00:19:43.825 "base_bdevs_list": [ 00:19:43.825 { 00:19:43.825 "name": "spare", 00:19:43.825 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2", 00:19:43.825 "is_configured": true, 00:19:43.825 "data_offset": 2048, 00:19:43.825 "data_size": 63488 00:19:43.825 }, 00:19:43.825 { 00:19:43.825 "name": 
"BaseBdev2", 00:19:43.825 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:43.825 "is_configured": true, 00:19:43.825 "data_offset": 2048, 00:19:43.825 "data_size": 63488 00:19:43.825 }, 00:19:43.825 { 00:19:43.825 "name": "BaseBdev3", 00:19:43.825 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:43.825 "is_configured": true, 00:19:43.825 "data_offset": 2048, 00:19:43.825 "data_size": 63488 00:19:43.825 }, 00:19:43.825 { 00:19:43.825 "name": "BaseBdev4", 00:19:43.825 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:43.825 "is_configured": true, 00:19:43.825 "data_offset": 2048, 00:19:43.825 "data_size": 63488 00:19:43.825 } 00:19:43.825 ] 00:19:43.825 }' 00:19:43.825 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.825 13:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.084 [2024-10-01 13:53:54.187649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.084 [2024-10-01 13:53:54.187693] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.084 [2024-10-01 13:53:54.187788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.084 [2024-10-01 13:53:54.187898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.084 [2024-10-01 13:53:54.187918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.084 
13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.084 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:19:44.342 /dev/nbd0 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.342 1+0 records in 00:19:44.342 1+0 records out 00:19:44.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401244 s, 10.2 MB/s 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:44.342 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@889 -- # return 0 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:44.600 /dev/nbd1 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:44.600 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.878 1+0 records in 00:19:44.878 1+0 records out 00:19:44.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053171 s, 7.7 MB/s 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.878 
13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:44.878 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:44.879 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.879 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.879 13:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.879 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.137 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.137 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.137 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.137 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:45.137 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:45.138 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:45.138 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:19:45.138 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:19:45.138 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:45.138 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.397 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:45.397 [2024-10-01 13:53:55.543525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:45.397 [2024-10-01 13:53:55.543596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:45.397 [2024-10-01 13:53:55.543627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:19:45.398 [2024-10-01 13:53:55.543642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:45.398 [2024-10-01 13:53:55.546537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:45.398 [2024-10-01 13:53:55.546579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:45.398 [2024-10-01 13:53:55.546691] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:19:45.398 [2024-10-01 13:53:55.546756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:45.398 [2024-10-01 13:53:55.546919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:45.398 [2024-10-01 13:53:55.547021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:45.398 [2024-10-01 13:53:55.547122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:45.398 spare
00:19:45.398 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.398 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:19:45.398 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.398 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:45.656 [2024-10-01 13:53:55.647073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:19:45.656 [2024-10-01 13:53:55.647383] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:19:45.656 [2024-10-01 13:53:55.647889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:19:45.656 [2024-10-01 13:53:55.656169] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:19:45.656 [2024-10-01 13:53:55.656348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:19:45.656 [2024-10-01 13:53:55.656751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.656 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:45.656 "name": "raid_bdev1",
00:19:45.656 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:45.656 "strip_size_kb": 64,
00:19:45.656 "state": "online",
00:19:45.656 "raid_level": "raid5f",
00:19:45.656 "superblock": true,
00:19:45.656 "num_base_bdevs": 4,
00:19:45.656 "num_base_bdevs_discovered": 4,
00:19:45.656 "num_base_bdevs_operational": 4,
00:19:45.656 "base_bdevs_list": [
00:19:45.656 {
00:19:45.656 "name": "spare",
00:19:45.656 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2",
00:19:45.656 "is_configured": true,
00:19:45.656 "data_offset": 2048,
00:19:45.656 "data_size": 63488
00:19:45.657 },
00:19:45.657 {
00:19:45.657 "name": "BaseBdev2",
00:19:45.657 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:45.657 "is_configured": true,
00:19:45.657 "data_offset": 2048,
00:19:45.657 "data_size": 63488
00:19:45.657 },
00:19:45.657 {
00:19:45.657 "name": "BaseBdev3",
00:19:45.657 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:45.657 "is_configured": true,
00:19:45.657 "data_offset": 2048,
00:19:45.657 "data_size": 63488
00:19:45.657 },
00:19:45.657 {
00:19:45.657 "name": "BaseBdev4",
00:19:45.657 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:45.657 "is_configured": true,
00:19:45.657 "data_offset": 2048,
00:19:45.657 "data_size": 63488
00:19:45.657 }
00:19:45.657 ]
00:19:45.657 }'
00:19:45.657 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:45.657 13:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:46.225 "name": "raid_bdev1",
00:19:46.225 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:46.225 "strip_size_kb": 64,
00:19:46.225 "state": "online",
00:19:46.225 "raid_level": "raid5f",
00:19:46.225 "superblock": true,
00:19:46.225 "num_base_bdevs": 4,
00:19:46.225 "num_base_bdevs_discovered": 4,
00:19:46.225 "num_base_bdevs_operational": 4,
00:19:46.225 "base_bdevs_list": [
00:19:46.225 {
00:19:46.225 "name": "spare",
00:19:46.225 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2",
00:19:46.225 "is_configured": true,
00:19:46.225 "data_offset": 2048,
00:19:46.225 "data_size": 63488
00:19:46.225 },
00:19:46.225 {
00:19:46.225 "name": "BaseBdev2",
00:19:46.225 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:46.225 "is_configured": true,
00:19:46.225 "data_offset": 2048,
00:19:46.225 "data_size": 63488
00:19:46.225 },
00:19:46.225 {
00:19:46.225 "name": "BaseBdev3",
00:19:46.225 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:46.225 "is_configured": true,
00:19:46.225 "data_offset": 2048,
00:19:46.225 "data_size": 63488
00:19:46.225 },
00:19:46.225 {
00:19:46.225 "name": "BaseBdev4",
00:19:46.225 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:46.225 "is_configured": true,
00:19:46.225 "data_offset": 2048,
00:19:46.225 "data_size": 63488
00:19:46.225 }
00:19:46.225 ]
00:19:46.225 }'
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.225 [2024-10-01 13:53:56.306142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:46.225 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:46.226 "name": "raid_bdev1",
00:19:46.226 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:46.226 "strip_size_kb": 64,
00:19:46.226 "state": "online",
00:19:46.226 "raid_level": "raid5f",
00:19:46.226 "superblock": true,
00:19:46.226 "num_base_bdevs": 4,
00:19:46.226 "num_base_bdevs_discovered": 3,
00:19:46.226 "num_base_bdevs_operational": 3,
00:19:46.226 "base_bdevs_list": [
00:19:46.226 {
00:19:46.226 "name": null,
00:19:46.226 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:46.226 "is_configured": false,
00:19:46.226 "data_offset": 0,
00:19:46.226 "data_size": 63488
00:19:46.226 },
00:19:46.226 {
00:19:46.226 "name": "BaseBdev2",
00:19:46.226 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:46.226 "is_configured": true,
00:19:46.226 "data_offset": 2048,
00:19:46.226 "data_size": 63488
00:19:46.226 },
00:19:46.226 {
00:19:46.226 "name": "BaseBdev3",
00:19:46.226 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:46.226 "is_configured": true,
00:19:46.226 "data_offset": 2048,
00:19:46.226 "data_size": 63488
00:19:46.226 },
00:19:46.226 {
00:19:46.226 "name": "BaseBdev4",
00:19:46.226 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:46.226 "is_configured": true,
00:19:46.226 "data_offset": 2048,
00:19:46.226 "data_size": 63488
00:19:46.226 }
00:19:46.226 ]
00:19:46.226 }'
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:46.226 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.793 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:19:46.793 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.793 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:46.793 [2024-10-01 13:53:56.797513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:46.793 [2024-10-01 13:53:56.797712] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:19:46.793 [2024-10-01 13:53:56.797735] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:19:46.793 [2024-10-01 13:53:56.797783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:46.793 [2024-10-01 13:53:56.813811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0
00:19:46.793 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.793 13:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:19:46.793 [2024-10-01 13:53:56.824408] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:47.728 "name": "raid_bdev1",
00:19:47.728 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:47.728 "strip_size_kb": 64,
00:19:47.728 "state": "online",
00:19:47.728 "raid_level": "raid5f",
00:19:47.728 "superblock": true,
00:19:47.728 "num_base_bdevs": 4,
00:19:47.728 "num_base_bdevs_discovered": 4,
00:19:47.728 "num_base_bdevs_operational": 4,
00:19:47.728 "process": {
00:19:47.728 "type": "rebuild",
00:19:47.728 "target": "spare",
00:19:47.728 "progress": {
00:19:47.728 "blocks": 17280,
00:19:47.728 "percent": 9
00:19:47.728 }
00:19:47.728 },
00:19:47.728 "base_bdevs_list": [
00:19:47.728 {
00:19:47.728 "name": "spare",
00:19:47.728 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2",
00:19:47.728 "is_configured": true,
00:19:47.728 "data_offset": 2048,
00:19:47.728 "data_size": 63488
00:19:47.728 },
00:19:47.728 {
00:19:47.728 "name": "BaseBdev2",
00:19:47.728 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:47.728 "is_configured": true,
00:19:47.728 "data_offset": 2048,
00:19:47.728 "data_size": 63488
00:19:47.728 },
00:19:47.728 {
00:19:47.728 "name": "BaseBdev3",
00:19:47.728 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:47.728 "is_configured": true,
00:19:47.728 "data_offset": 2048,
00:19:47.728 "data_size": 63488
00:19:47.728 },
00:19:47.728 {
00:19:47.728 "name": "BaseBdev4",
00:19:47.728 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:47.728 "is_configured": true,
00:19:47.728 "data_offset": 2048,
00:19:47.728 "data_size": 63488
00:19:47.728 }
00:19:47.728 ]
00:19:47.728 }'
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:47.728 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:47.986 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:47.987 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:19:47.987 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:47.987 13:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:47.987 [2024-10-01 13:53:57.943849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:47.987 [2024-10-01 13:53:58.033630] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:47.987 [2024-10-01 13:53:58.033741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:47.987 [2024-10-01 13:53:58.033762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:47.987 [2024-10-01 13:53:58.033779] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:47.987 "name": "raid_bdev1",
00:19:47.987 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:47.987 "strip_size_kb": 64,
00:19:47.987 "state": "online",
00:19:47.987 "raid_level": "raid5f",
00:19:47.987 "superblock": true,
00:19:47.987 "num_base_bdevs": 4,
00:19:47.987 "num_base_bdevs_discovered": 3,
00:19:47.987 "num_base_bdevs_operational": 3,
00:19:47.987 "base_bdevs_list": [
00:19:47.987 {
00:19:47.987 "name": null,
00:19:47.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:47.987 "is_configured": false,
00:19:47.987 "data_offset": 0,
00:19:47.987 "data_size": 63488
00:19:47.987 },
00:19:47.987 {
00:19:47.987 "name": "BaseBdev2",
00:19:47.987 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:47.987 "is_configured": true,
00:19:47.987 "data_offset": 2048,
00:19:47.987 "data_size": 63488
00:19:47.987 },
00:19:47.987 {
00:19:47.987 "name": "BaseBdev3",
00:19:47.987 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:47.987 "is_configured": true,
00:19:47.987 "data_offset": 2048,
00:19:47.987 "data_size": 63488
00:19:47.987 },
00:19:47.987 {
00:19:47.987 "name": "BaseBdev4",
00:19:47.987 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:47.987 "is_configured": true,
00:19:47.987 "data_offset": 2048,
00:19:47.987 "data_size": 63488
00:19:47.987 }
00:19:47.987 ]
00:19:47.987 }'
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:47.987 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:48.552 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:19:48.552 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:48.552 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:48.552 [2024-10-01 13:53:58.557961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:48.552 [2024-10-01 13:53:58.558060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:48.552 [2024-10-01 13:53:58.558113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:19:48.552 [2024-10-01 13:53:58.558130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:48.552 [2024-10-01 13:53:58.558771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:48.552 [2024-10-01 13:53:58.558849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:48.552 [2024-10-01 13:53:58.558963] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:19:48.552 [2024-10-01 13:53:58.558982] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:19:48.552 [2024-10-01 13:53:58.558997] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:19:48.552 [2024-10-01 13:53:58.559028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:48.552 [2024-10-01 13:53:58.575494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370
00:19:48.552 spare
00:19:48.552 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:48.552 13:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:19:48.552 [2024-10-01 13:53:58.586304] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:49.486 "name": "raid_bdev1",
00:19:49.486 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:49.486 "strip_size_kb": 64,
00:19:49.486 "state": "online",
00:19:49.486 "raid_level": "raid5f",
00:19:49.486 "superblock": true,
00:19:49.486 "num_base_bdevs": 4,
00:19:49.486 "num_base_bdevs_discovered": 4,
00:19:49.486 "num_base_bdevs_operational": 4,
00:19:49.486 "process": {
00:19:49.486 "type": "rebuild",
00:19:49.486 "target": "spare",
00:19:49.486 "progress": {
00:19:49.486 "blocks": 17280,
00:19:49.486 "percent": 9
00:19:49.486 }
00:19:49.486 },
00:19:49.486 "base_bdevs_list": [
00:19:49.486 {
00:19:49.486 "name": "spare",
00:19:49.486 "uuid": "8367ba1b-2102-52fd-b094-54e52cf404f2",
00:19:49.486 "is_configured": true,
00:19:49.486 "data_offset": 2048,
00:19:49.486 "data_size": 63488
00:19:49.486 },
00:19:49.486 {
00:19:49.486 "name": "BaseBdev2",
00:19:49.486 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:49.486 "is_configured": true,
00:19:49.486 "data_offset": 2048,
00:19:49.486 "data_size": 63488
00:19:49.486 },
00:19:49.486 {
00:19:49.486 "name": "BaseBdev3",
00:19:49.486 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:49.486 "is_configured": true,
00:19:49.486 "data_offset": 2048,
00:19:49.486 "data_size": 63488
00:19:49.486 },
00:19:49.486 {
00:19:49.486 "name": "BaseBdev4",
00:19:49.486 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:49.486 "is_configured": true,
00:19:49.486 "data_offset": 2048,
00:19:49.486 "data_size": 63488
00:19:49.486 }
00:19:49.486 ]
00:19:49.486 }'
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:49.486 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:49.744 [2024-10-01 13:53:59.809982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:49.744 [2024-10-01 13:53:59.896921] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:49.744 [2024-10-01 13:53:59.897290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:49.744 [2024-10-01 13:53:59.897429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:49.744 [2024-10-01 13:53:59.897541] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:49.744 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.002 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:50.002 "name": "raid_bdev1",
00:19:50.002 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:50.002 "strip_size_kb": 64,
00:19:50.002 "state": "online",
00:19:50.002 "raid_level": "raid5f",
00:19:50.002 "superblock": true,
00:19:50.002 "num_base_bdevs": 4,
00:19:50.002 "num_base_bdevs_discovered": 3,
00:19:50.002 "num_base_bdevs_operational": 3,
00:19:50.002 "base_bdevs_list": [
00:19:50.002 {
00:19:50.002 "name": null,
00:19:50.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:50.002 "is_configured": false,
00:19:50.002 "data_offset": 0,
00:19:50.002 "data_size": 63488
00:19:50.002 },
00:19:50.003 {
00:19:50.003 "name": "BaseBdev2",
00:19:50.003 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:50.003 "is_configured": true,
00:19:50.003 "data_offset": 2048,
00:19:50.003 "data_size": 63488
00:19:50.003 },
00:19:50.003 {
00:19:50.003 "name": "BaseBdev3",
00:19:50.003 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:50.003 "is_configured": true,
00:19:50.003 "data_offset": 2048,
00:19:50.003 "data_size": 63488
00:19:50.003 },
00:19:50.003 {
00:19:50.003 "name": "BaseBdev4",
00:19:50.003 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:50.003 "is_configured": true,
00:19:50.003 "data_offset": 2048,
00:19:50.003 "data_size": 63488
00:19:50.003 }
00:19:50.003 ]
00:19:50.003 }'
00:19:50.003 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:50.003 13:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:50.260 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:50.261 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.261 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:50.261 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.261 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:50.261 "name": "raid_bdev1",
00:19:50.261 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10",
00:19:50.261 "strip_size_kb": 64,
00:19:50.261 "state": "online",
00:19:50.261 "raid_level": "raid5f",
00:19:50.261 "superblock": true,
00:19:50.261 "num_base_bdevs": 4,
00:19:50.261 "num_base_bdevs_discovered": 3,
00:19:50.261 "num_base_bdevs_operational": 3,
00:19:50.261 "base_bdevs_list": [
00:19:50.261 {
00:19:50.261 "name": null,
00:19:50.261 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:50.261 "is_configured": false,
00:19:50.261 "data_offset": 0,
00:19:50.261 "data_size": 63488
00:19:50.261 },
00:19:50.261 {
00:19:50.261 "name": "BaseBdev2",
00:19:50.261 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932",
00:19:50.261 "is_configured": true,
00:19:50.261 "data_offset": 2048,
00:19:50.261 "data_size": 63488
00:19:50.261 },
00:19:50.261 {
00:19:50.261 "name": "BaseBdev3",
00:19:50.261 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e",
00:19:50.261 "is_configured": true,
00:19:50.261 "data_offset": 2048,
00:19:50.261 "data_size": 63488
00:19:50.261 },
00:19:50.261 {
00:19:50.261 "name": "BaseBdev4",
00:19:50.261 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7",
00:19:50.261 "is_configured": true,
00:19:50.261 "data_offset": 2048,
00:19:50.261 "data_size": 63488
00:19:50.261 }
00:19:50.261 ]
00:19:50.261 }'
00:19:50.261 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:50.520 [2024-10-01 13:54:00.539590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:50.520 [2024-10-01 13:54:00.539795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:50.520 [2024-10-01 13:54:00.539919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:19:50.520 [2024-10-01 13:54:00.540014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:50.520 [2024-10-01 13:54:00.540618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:50.520 [2024-10-01 13:54:00.540776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:50.520 [2024-10-01 13:54:00.540972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:19:50.520 [2024-10-01 13:54:00.541090] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:19:50.520 [2024-10-01 13:54:00.541231] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:19:50.520 [2024-10-01 13:54:00.541310] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:19:50.520 BaseBdev1
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.520 13:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- #
raid_bdev_info='{ 00:19:51.457 "name": "raid_bdev1", 00:19:51.457 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:51.457 "strip_size_kb": 64, 00:19:51.457 "state": "online", 00:19:51.457 "raid_level": "raid5f", 00:19:51.457 "superblock": true, 00:19:51.457 "num_base_bdevs": 4, 00:19:51.457 "num_base_bdevs_discovered": 3, 00:19:51.457 "num_base_bdevs_operational": 3, 00:19:51.457 "base_bdevs_list": [ 00:19:51.457 { 00:19:51.457 "name": null, 00:19:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.457 "is_configured": false, 00:19:51.457 "data_offset": 0, 00:19:51.457 "data_size": 63488 00:19:51.457 }, 00:19:51.457 { 00:19:51.457 "name": "BaseBdev2", 00:19:51.457 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:51.457 "is_configured": true, 00:19:51.457 "data_offset": 2048, 00:19:51.457 "data_size": 63488 00:19:51.457 }, 00:19:51.457 { 00:19:51.457 "name": "BaseBdev3", 00:19:51.457 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:51.457 "is_configured": true, 00:19:51.457 "data_offset": 2048, 00:19:51.457 "data_size": 63488 00:19:51.457 }, 00:19:51.457 { 00:19:51.457 "name": "BaseBdev4", 00:19:51.457 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:51.457 "is_configured": true, 00:19:51.457 "data_offset": 2048, 00:19:51.457 "data_size": 63488 00:19:51.457 } 00:19:51.457 ] 00:19:51.457 }' 00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.457 13:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.035 "name": "raid_bdev1", 00:19:52.035 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:52.035 "strip_size_kb": 64, 00:19:52.035 "state": "online", 00:19:52.035 "raid_level": "raid5f", 00:19:52.035 "superblock": true, 00:19:52.035 "num_base_bdevs": 4, 00:19:52.035 "num_base_bdevs_discovered": 3, 00:19:52.035 "num_base_bdevs_operational": 3, 00:19:52.035 "base_bdevs_list": [ 00:19:52.035 { 00:19:52.035 "name": null, 00:19:52.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.035 "is_configured": false, 00:19:52.035 "data_offset": 0, 00:19:52.035 "data_size": 63488 00:19:52.035 }, 00:19:52.035 { 00:19:52.035 "name": "BaseBdev2", 00:19:52.035 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:52.035 "is_configured": true, 00:19:52.035 "data_offset": 2048, 00:19:52.035 "data_size": 63488 00:19:52.035 }, 00:19:52.035 { 00:19:52.035 "name": "BaseBdev3", 00:19:52.035 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:52.035 "is_configured": true, 00:19:52.035 "data_offset": 2048, 00:19:52.035 "data_size": 63488 00:19:52.035 }, 00:19:52.035 { 00:19:52.035 "name": "BaseBdev4", 00:19:52.035 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:52.035 "is_configured": true, 
00:19:52.035 "data_offset": 2048, 00:19:52.035 "data_size": 63488 00:19:52.035 } 00:19:52.035 ] 00:19:52.035 }' 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.035 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.035 [2024-10-01 13:54:02.151662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.035 [2024-10-01 13:54:02.151841] bdev_raid.c:3690:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.035 [2024-10-01 13:54:02.151866] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:52.035 request: 00:19:52.035 { 00:19:52.036 "base_bdev": "BaseBdev1", 00:19:52.036 "raid_bdev": "raid_bdev1", 00:19:52.036 "method": "bdev_raid_add_base_bdev", 00:19:52.036 "req_id": 1 00:19:52.036 } 00:19:52.036 Got JSON-RPC error response 00:19:52.036 response: 00:19:52.036 { 00:19:52.036 "code": -22, 00:19:52.036 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:52.036 } 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.036 13:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.413 13:54:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.413 "name": "raid_bdev1", 00:19:53.413 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:53.413 "strip_size_kb": 64, 00:19:53.413 "state": "online", 00:19:53.413 "raid_level": "raid5f", 00:19:53.413 "superblock": true, 00:19:53.413 "num_base_bdevs": 4, 00:19:53.413 "num_base_bdevs_discovered": 3, 00:19:53.413 "num_base_bdevs_operational": 3, 00:19:53.413 "base_bdevs_list": [ 00:19:53.413 { 00:19:53.413 "name": null, 00:19:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.413 "is_configured": false, 00:19:53.413 "data_offset": 0, 00:19:53.413 "data_size": 63488 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "name": "BaseBdev2", 00:19:53.413 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:53.413 "is_configured": true, 00:19:53.413 "data_offset": 2048, 00:19:53.413 "data_size": 63488 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "name": "BaseBdev3", 00:19:53.413 "uuid": 
"501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:53.413 "is_configured": true, 00:19:53.413 "data_offset": 2048, 00:19:53.413 "data_size": 63488 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "name": "BaseBdev4", 00:19:53.413 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:53.413 "is_configured": true, 00:19:53.413 "data_offset": 2048, 00:19:53.413 "data_size": 63488 00:19:53.413 } 00:19:53.413 ] 00:19:53.413 }' 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.413 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.672 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.672 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.673 "name": "raid_bdev1", 00:19:53.673 "uuid": "61a33429-e461-4eb3-8f8a-67b71f8dde10", 00:19:53.673 "strip_size_kb": 64, 00:19:53.673 "state": 
"online", 00:19:53.673 "raid_level": "raid5f", 00:19:53.673 "superblock": true, 00:19:53.673 "num_base_bdevs": 4, 00:19:53.673 "num_base_bdevs_discovered": 3, 00:19:53.673 "num_base_bdevs_operational": 3, 00:19:53.673 "base_bdevs_list": [ 00:19:53.673 { 00:19:53.673 "name": null, 00:19:53.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.673 "is_configured": false, 00:19:53.673 "data_offset": 0, 00:19:53.673 "data_size": 63488 00:19:53.673 }, 00:19:53.673 { 00:19:53.673 "name": "BaseBdev2", 00:19:53.673 "uuid": "d62c8230-c200-5b34-9341-b635da8b6932", 00:19:53.673 "is_configured": true, 00:19:53.673 "data_offset": 2048, 00:19:53.673 "data_size": 63488 00:19:53.673 }, 00:19:53.673 { 00:19:53.673 "name": "BaseBdev3", 00:19:53.673 "uuid": "501d03b9-6ac8-5494-a53c-92c2d59a551e", 00:19:53.673 "is_configured": true, 00:19:53.673 "data_offset": 2048, 00:19:53.673 "data_size": 63488 00:19:53.673 }, 00:19:53.673 { 00:19:53.673 "name": "BaseBdev4", 00:19:53.673 "uuid": "905e60d7-5efb-5b3e-a504-5eee4f7a2ae7", 00:19:53.673 "is_configured": true, 00:19:53.673 "data_offset": 2048, 00:19:53.673 "data_size": 63488 00:19:53.673 } 00:19:53.673 ] 00:19:53.673 }' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85175 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85175 ']' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85175 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@955 -- # uname 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85175 00:19:53.673 killing process with pid 85175 00:19:53.673 Received shutdown signal, test time was about 60.000000 seconds 00:19:53.673 00:19:53.673 Latency(us) 00:19:53.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.673 =================================================================================================================== 00:19:53.673 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85175' 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85175 00:19:53.673 [2024-10-01 13:54:03.790697] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.673 [2024-10-01 13:54:03.790842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.673 [2024-10-01 13:54:03.790930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.673 [2024-10-01 13:54:03.790947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:53.673 13:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85175 00:19:54.239 [2024-10-01 13:54:04.321329] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.682 13:54:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@786 -- # return 0 00:19:55.682 00:19:55.682 real 0m27.773s 00:19:55.682 user 0m34.804s 00:19:55.682 sys 0m3.579s 00:19:55.682 13:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.682 ************************************ 00:19:55.682 END TEST raid5f_rebuild_test_sb 00:19:55.682 ************************************ 00:19:55.682 13:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.682 13:54:05 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:55.682 13:54:05 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:55.682 13:54:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:55.682 13:54:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.682 13:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.682 ************************************ 00:19:55.682 START TEST raid_state_function_test_sb_4k 00:19:55.682 ************************************ 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:55.682 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85991 00:19:55.683 Process raid pid: 85991 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85991' 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85991 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85991 ']' 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.683 13:54:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.683 [2024-10-01 13:54:05.828901] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:19:55.683 [2024-10-01 13:54:05.829036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.942 [2024-10-01 13:54:06.002816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.200 [2024-10-01 13:54:06.234070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.458 [2024-10-01 13:54:06.460356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.458 [2024-10-01 13:54:06.460399] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.717 [2024-10-01 13:54:06.751842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:56.717 [2024-10-01 13:54:06.752067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:56.717 [2024-10-01 13:54:06.752183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.717 [2024-10-01 13:54:06.752233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.717 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.718 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.718 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.718 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.718 "name": "Existed_Raid", 00:19:56.718 "uuid": 
"ada8aa66-d667-4600-90da-12a405209470", 00:19:56.718 "strip_size_kb": 0, 00:19:56.718 "state": "configuring", 00:19:56.718 "raid_level": "raid1", 00:19:56.718 "superblock": true, 00:19:56.718 "num_base_bdevs": 2, 00:19:56.718 "num_base_bdevs_discovered": 0, 00:19:56.718 "num_base_bdevs_operational": 2, 00:19:56.718 "base_bdevs_list": [ 00:19:56.718 { 00:19:56.718 "name": "BaseBdev1", 00:19:56.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.718 "is_configured": false, 00:19:56.718 "data_offset": 0, 00:19:56.718 "data_size": 0 00:19:56.718 }, 00:19:56.718 { 00:19:56.718 "name": "BaseBdev2", 00:19:56.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.718 "is_configured": false, 00:19:56.718 "data_offset": 0, 00:19:56.718 "data_size": 0 00:19:56.718 } 00:19:56.718 ] 00:19:56.718 }' 00:19:56.718 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.718 13:54:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.286 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:57.286 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.286 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.286 [2024-10-01 13:54:07.219632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.286 [2024-10-01 13:54:07.219807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:57.286 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:57.287 13:54:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 [2024-10-01 13:54:07.231672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.287 [2024-10-01 13:54:07.231830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.287 [2024-10-01 13:54:07.231922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.287 [2024-10-01 13:54:07.232044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 [2024-10-01 13:54:07.304229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.287 BaseBdev1 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 [ 00:19:57.287 { 00:19:57.287 "name": "BaseBdev1", 00:19:57.287 "aliases": [ 00:19:57.287 "508d33cd-4f31-4895-a911-038503d07be8" 00:19:57.287 ], 00:19:57.287 "product_name": "Malloc disk", 00:19:57.287 "block_size": 4096, 00:19:57.287 "num_blocks": 8192, 00:19:57.287 "uuid": "508d33cd-4f31-4895-a911-038503d07be8", 00:19:57.287 "assigned_rate_limits": { 00:19:57.287 "rw_ios_per_sec": 0, 00:19:57.287 "rw_mbytes_per_sec": 0, 00:19:57.287 "r_mbytes_per_sec": 0, 00:19:57.287 "w_mbytes_per_sec": 0 00:19:57.287 }, 00:19:57.287 "claimed": true, 00:19:57.287 "claim_type": "exclusive_write", 00:19:57.287 "zoned": false, 00:19:57.287 "supported_io_types": { 00:19:57.287 "read": true, 00:19:57.287 "write": true, 00:19:57.287 "unmap": true, 00:19:57.287 "flush": true, 00:19:57.287 "reset": true, 00:19:57.287 "nvme_admin": false, 00:19:57.287 "nvme_io": false, 00:19:57.287 "nvme_io_md": false, 00:19:57.287 "write_zeroes": true, 00:19:57.287 "zcopy": true, 00:19:57.287 
"get_zone_info": false, 00:19:57.287 "zone_management": false, 00:19:57.287 "zone_append": false, 00:19:57.287 "compare": false, 00:19:57.287 "compare_and_write": false, 00:19:57.287 "abort": true, 00:19:57.287 "seek_hole": false, 00:19:57.287 "seek_data": false, 00:19:57.287 "copy": true, 00:19:57.287 "nvme_iov_md": false 00:19:57.287 }, 00:19:57.287 "memory_domains": [ 00:19:57.287 { 00:19:57.287 "dma_device_id": "system", 00:19:57.287 "dma_device_type": 1 00:19:57.287 }, 00:19:57.287 { 00:19:57.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.287 "dma_device_type": 2 00:19:57.287 } 00:19:57.287 ], 00:19:57.287 "driver_specific": {} 00:19:57.287 } 00:19:57.287 ] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.287 "name": "Existed_Raid", 00:19:57.287 "uuid": "f535221e-b143-4af0-b7a0-598854f48bcf", 00:19:57.287 "strip_size_kb": 0, 00:19:57.287 "state": "configuring", 00:19:57.287 "raid_level": "raid1", 00:19:57.287 "superblock": true, 00:19:57.287 "num_base_bdevs": 2, 00:19:57.287 "num_base_bdevs_discovered": 1, 00:19:57.287 "num_base_bdevs_operational": 2, 00:19:57.287 "base_bdevs_list": [ 00:19:57.287 { 00:19:57.287 "name": "BaseBdev1", 00:19:57.287 "uuid": "508d33cd-4f31-4895-a911-038503d07be8", 00:19:57.287 "is_configured": true, 00:19:57.287 "data_offset": 256, 00:19:57.287 "data_size": 7936 00:19:57.287 }, 00:19:57.287 { 00:19:57.287 "name": "BaseBdev2", 00:19:57.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.287 "is_configured": false, 00:19:57.287 "data_offset": 0, 00:19:57.287 "data_size": 0 00:19:57.287 } 00:19:57.287 ] 00:19:57.287 }' 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.287 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.856 [2024-10-01 13:54:07.755657] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.856 [2024-10-01 13:54:07.755710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.856 [2024-10-01 13:54:07.767696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.856 [2024-10-01 13:54:07.769960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.856 [2024-10-01 13:54:07.770011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.856 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:57.857 13:54:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.857 "name": "Existed_Raid", 00:19:57.857 "uuid": "81f32a50-cfe5-464c-9c91-529f65c3b1c0", 00:19:57.857 "strip_size_kb": 0, 00:19:57.857 "state": "configuring", 00:19:57.857 "raid_level": "raid1", 00:19:57.857 "superblock": true, 
00:19:57.857 "num_base_bdevs": 2, 00:19:57.857 "num_base_bdevs_discovered": 1, 00:19:57.857 "num_base_bdevs_operational": 2, 00:19:57.857 "base_bdevs_list": [ 00:19:57.857 { 00:19:57.857 "name": "BaseBdev1", 00:19:57.857 "uuid": "508d33cd-4f31-4895-a911-038503d07be8", 00:19:57.857 "is_configured": true, 00:19:57.857 "data_offset": 256, 00:19:57.857 "data_size": 7936 00:19:57.857 }, 00:19:57.857 { 00:19:57.857 "name": "BaseBdev2", 00:19:57.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.857 "is_configured": false, 00:19:57.857 "data_offset": 0, 00:19:57.857 "data_size": 0 00:19:57.857 } 00:19:57.857 ] 00:19:57.857 }' 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.857 13:54:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 [2024-10-01 13:54:08.263195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.141 [2024-10-01 13:54:08.263519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:58.141 [2024-10-01 13:54:08.263537] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.141 [2024-10-01 13:54:08.263832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:58.141 [2024-10-01 13:54:08.263986] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:58.141 [2024-10-01 13:54:08.264008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:19:58.141 BaseBdev2 00:19:58.141 [2024-10-01 13:54:08.264163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:58.141 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.142 [ 00:19:58.142 { 00:19:58.142 "name": "BaseBdev2", 00:19:58.142 "aliases": [ 00:19:58.142 "b5156ccb-f560-4581-ad55-7db9f797b690" 00:19:58.142 ], 00:19:58.142 "product_name": "Malloc 
disk", 00:19:58.142 "block_size": 4096, 00:19:58.142 "num_blocks": 8192, 00:19:58.142 "uuid": "b5156ccb-f560-4581-ad55-7db9f797b690", 00:19:58.142 "assigned_rate_limits": { 00:19:58.142 "rw_ios_per_sec": 0, 00:19:58.142 "rw_mbytes_per_sec": 0, 00:19:58.142 "r_mbytes_per_sec": 0, 00:19:58.142 "w_mbytes_per_sec": 0 00:19:58.142 }, 00:19:58.142 "claimed": true, 00:19:58.142 "claim_type": "exclusive_write", 00:19:58.142 "zoned": false, 00:19:58.142 "supported_io_types": { 00:19:58.142 "read": true, 00:19:58.142 "write": true, 00:19:58.142 "unmap": true, 00:19:58.142 "flush": true, 00:19:58.142 "reset": true, 00:19:58.142 "nvme_admin": false, 00:19:58.142 "nvme_io": false, 00:19:58.142 "nvme_io_md": false, 00:19:58.142 "write_zeroes": true, 00:19:58.142 "zcopy": true, 00:19:58.142 "get_zone_info": false, 00:19:58.142 "zone_management": false, 00:19:58.142 "zone_append": false, 00:19:58.142 "compare": false, 00:19:58.142 "compare_and_write": false, 00:19:58.142 "abort": true, 00:19:58.142 "seek_hole": false, 00:19:58.142 "seek_data": false, 00:19:58.142 "copy": true, 00:19:58.142 "nvme_iov_md": false 00:19:58.142 }, 00:19:58.142 "memory_domains": [ 00:19:58.142 { 00:19:58.142 "dma_device_id": "system", 00:19:58.142 "dma_device_type": 1 00:19:58.142 }, 00:19:58.142 { 00:19:58.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.142 "dma_device_type": 2 00:19:58.142 } 00:19:58.142 ], 00:19:58.142 "driver_specific": {} 00:19:58.142 } 00:19:58.142 ] 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:19:58.142 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.438 "name": "Existed_Raid", 00:19:58.438 "uuid": "81f32a50-cfe5-464c-9c91-529f65c3b1c0", 00:19:58.438 "strip_size_kb": 0, 00:19:58.438 "state": "online", 
00:19:58.438 "raid_level": "raid1", 00:19:58.438 "superblock": true, 00:19:58.438 "num_base_bdevs": 2, 00:19:58.438 "num_base_bdevs_discovered": 2, 00:19:58.438 "num_base_bdevs_operational": 2, 00:19:58.438 "base_bdevs_list": [ 00:19:58.438 { 00:19:58.438 "name": "BaseBdev1", 00:19:58.438 "uuid": "508d33cd-4f31-4895-a911-038503d07be8", 00:19:58.438 "is_configured": true, 00:19:58.438 "data_offset": 256, 00:19:58.438 "data_size": 7936 00:19:58.438 }, 00:19:58.438 { 00:19:58.438 "name": "BaseBdev2", 00:19:58.438 "uuid": "b5156ccb-f560-4581-ad55-7db9f797b690", 00:19:58.438 "is_configured": true, 00:19:58.438 "data_offset": 256, 00:19:58.438 "data_size": 7936 00:19:58.438 } 00:19:58.438 ] 00:19:58.438 }' 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.438 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:19:58.697 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:58.698 [2024-10-01 13:54:08.758858] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:58.698 "name": "Existed_Raid", 00:19:58.698 "aliases": [ 00:19:58.698 "81f32a50-cfe5-464c-9c91-529f65c3b1c0" 00:19:58.698 ], 00:19:58.698 "product_name": "Raid Volume", 00:19:58.698 "block_size": 4096, 00:19:58.698 "num_blocks": 7936, 00:19:58.698 "uuid": "81f32a50-cfe5-464c-9c91-529f65c3b1c0", 00:19:58.698 "assigned_rate_limits": { 00:19:58.698 "rw_ios_per_sec": 0, 00:19:58.698 "rw_mbytes_per_sec": 0, 00:19:58.698 "r_mbytes_per_sec": 0, 00:19:58.698 "w_mbytes_per_sec": 0 00:19:58.698 }, 00:19:58.698 "claimed": false, 00:19:58.698 "zoned": false, 00:19:58.698 "supported_io_types": { 00:19:58.698 "read": true, 00:19:58.698 "write": true, 00:19:58.698 "unmap": false, 00:19:58.698 "flush": false, 00:19:58.698 "reset": true, 00:19:58.698 "nvme_admin": false, 00:19:58.698 "nvme_io": false, 00:19:58.698 "nvme_io_md": false, 00:19:58.698 "write_zeroes": true, 00:19:58.698 "zcopy": false, 00:19:58.698 "get_zone_info": false, 00:19:58.698 "zone_management": false, 00:19:58.698 "zone_append": false, 00:19:58.698 "compare": false, 00:19:58.698 "compare_and_write": false, 00:19:58.698 "abort": false, 00:19:58.698 "seek_hole": false, 00:19:58.698 "seek_data": false, 00:19:58.698 "copy": false, 00:19:58.698 "nvme_iov_md": false 00:19:58.698 }, 00:19:58.698 "memory_domains": [ 00:19:58.698 { 00:19:58.698 "dma_device_id": "system", 00:19:58.698 "dma_device_type": 1 00:19:58.698 }, 00:19:58.698 { 00:19:58.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.698 "dma_device_type": 2 00:19:58.698 }, 00:19:58.698 { 00:19:58.698 
"dma_device_id": "system", 00:19:58.698 "dma_device_type": 1 00:19:58.698 }, 00:19:58.698 { 00:19:58.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.698 "dma_device_type": 2 00:19:58.698 } 00:19:58.698 ], 00:19:58.698 "driver_specific": { 00:19:58.698 "raid": { 00:19:58.698 "uuid": "81f32a50-cfe5-464c-9c91-529f65c3b1c0", 00:19:58.698 "strip_size_kb": 0, 00:19:58.698 "state": "online", 00:19:58.698 "raid_level": "raid1", 00:19:58.698 "superblock": true, 00:19:58.698 "num_base_bdevs": 2, 00:19:58.698 "num_base_bdevs_discovered": 2, 00:19:58.698 "num_base_bdevs_operational": 2, 00:19:58.698 "base_bdevs_list": [ 00:19:58.698 { 00:19:58.698 "name": "BaseBdev1", 00:19:58.698 "uuid": "508d33cd-4f31-4895-a911-038503d07be8", 00:19:58.698 "is_configured": true, 00:19:58.698 "data_offset": 256, 00:19:58.698 "data_size": 7936 00:19:58.698 }, 00:19:58.698 { 00:19:58.698 "name": "BaseBdev2", 00:19:58.698 "uuid": "b5156ccb-f560-4581-ad55-7db9f797b690", 00:19:58.698 "is_configured": true, 00:19:58.698 "data_offset": 256, 00:19:58.698 "data_size": 7936 00:19:58.698 } 00:19:58.698 ] 00:19:58.698 } 00:19:58.698 } 00:19:58.698 }' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:58.698 BaseBdev2' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.698 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:58.956 13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.956 
13:54:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.956 [2024-10-01 13:54:08.978333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.956 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.957 13:54:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.957 "name": "Existed_Raid", 00:19:58.957 "uuid": "81f32a50-cfe5-464c-9c91-529f65c3b1c0", 00:19:58.957 "strip_size_kb": 0, 00:19:58.957 "state": "online", 00:19:58.957 "raid_level": "raid1", 00:19:58.957 "superblock": true, 00:19:58.957 "num_base_bdevs": 2, 00:19:58.957 "num_base_bdevs_discovered": 1, 00:19:58.957 "num_base_bdevs_operational": 1, 00:19:58.957 "base_bdevs_list": [ 00:19:58.957 { 00:19:58.957 "name": null, 00:19:58.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.957 "is_configured": false, 00:19:58.957 "data_offset": 0, 00:19:58.957 "data_size": 7936 00:19:58.957 }, 00:19:58.957 { 00:19:58.957 "name": "BaseBdev2", 00:19:58.957 "uuid": "b5156ccb-f560-4581-ad55-7db9f797b690", 00:19:58.957 "is_configured": true, 00:19:58.957 "data_offset": 256, 00:19:58.957 "data_size": 7936 00:19:58.957 } 00:19:58.957 ] 00:19:58.957 }' 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.957 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:59.524 13:54:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.524 [2024-10-01 13:54:09.602896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:59.524 [2024-10-01 13:54:09.603132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.524 [2024-10-01 13:54:09.708072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.524 [2024-10-01 13:54:09.708129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.524 [2024-10-01 13:54:09.708146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:59.524 13:54:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:59.524 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85991 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85991 ']' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85991 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85991 00:19:59.783 killing process with pid 85991 00:19:59.783 13:54:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85991' 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85991 00:19:59.783 [2024-10-01 13:54:09.816189] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.783 13:54:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85991 00:19:59.783 [2024-10-01 13:54:09.834337] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.158 13:54:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:01.158 00:20:01.158 real 0m5.487s 00:20:01.158 user 0m7.751s 00:20:01.158 sys 0m1.009s 00:20:01.158 13:54:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.158 ************************************ 00:20:01.158 END TEST raid_state_function_test_sb_4k 00:20:01.158 ************************************ 00:20:01.158 13:54:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 13:54:11 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:01.158 13:54:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:01.158 13:54:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.158 13:54:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 ************************************ 00:20:01.158 START TEST raid_superblock_test_4k 00:20:01.158 ************************************ 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86249 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86249 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86249 ']' 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.158 13:54:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.416 [2024-10-01 13:54:11.394046] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:01.416 [2024-10-01 13:54:11.394181] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86249 ] 00:20:01.416 [2024-10-01 13:54:11.560615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.674 [2024-10-01 13:54:11.775722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.933 [2024-10-01 13:54:11.979374] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.933 [2024-10-01 13:54:11.979448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:20:02.192 13:54:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 malloc1 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 [2024-10-01 13:54:12.295882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.192 [2024-10-01 13:54:12.295963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.192 
[2024-10-01 13:54:12.296003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:02.192 [2024-10-01 13:54:12.296019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.192 [2024-10-01 13:54:12.298565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.192 [2024-10-01 13:54:12.298739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:02.192 pt1 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 malloc2 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.192 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 [2024-10-01 13:54:12.380355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.192 [2024-10-01 13:54:12.380553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.192 [2024-10-01 13:54:12.380646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:02.192 [2024-10-01 13:54:12.380720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.193 [2024-10-01 13:54:12.383202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.193 [2024-10-01 13:54:12.383340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.452 pt2 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.452 [2024-10-01 13:54:12.396419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:02.452 [2024-10-01 13:54:12.398626] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.452 [2024-10-01 13:54:12.398919] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:02.452 [2024-10-01 13:54:12.398940] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:02.452 [2024-10-01 13:54:12.399231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:02.452 [2024-10-01 13:54:12.399387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:02.452 [2024-10-01 13:54:12.399403] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:02.452 [2024-10-01 13:54:12.399591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.452 "name": "raid_bdev1", 00:20:02.452 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:02.452 "strip_size_kb": 0, 00:20:02.452 "state": "online", 00:20:02.452 "raid_level": "raid1", 00:20:02.452 "superblock": true, 00:20:02.452 "num_base_bdevs": 2, 00:20:02.452 "num_base_bdevs_discovered": 2, 00:20:02.452 "num_base_bdevs_operational": 2, 00:20:02.452 "base_bdevs_list": [ 00:20:02.452 { 00:20:02.452 "name": "pt1", 00:20:02.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.452 "is_configured": true, 00:20:02.452 "data_offset": 256, 00:20:02.452 "data_size": 7936 00:20:02.452 }, 00:20:02.452 { 00:20:02.452 "name": "pt2", 00:20:02.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.452 "is_configured": true, 00:20:02.452 "data_offset": 256, 00:20:02.452 "data_size": 7936 00:20:02.452 } 00:20:02.452 ] 00:20:02.452 }' 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.452 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:02.711 13:54:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:02.711 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.712 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.712 [2024-10-01 13:54:12.828038] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.712 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.712 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:02.712 "name": "raid_bdev1", 00:20:02.712 "aliases": [ 00:20:02.712 "4a63567b-334c-4073-bf80-eff0d3257e6b" 00:20:02.712 ], 00:20:02.712 "product_name": "Raid Volume", 00:20:02.712 "block_size": 4096, 00:20:02.712 "num_blocks": 7936, 00:20:02.712 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:02.712 "assigned_rate_limits": { 00:20:02.712 "rw_ios_per_sec": 0, 00:20:02.712 "rw_mbytes_per_sec": 0, 00:20:02.712 "r_mbytes_per_sec": 0, 00:20:02.712 "w_mbytes_per_sec": 0 00:20:02.712 }, 00:20:02.712 "claimed": false, 00:20:02.712 "zoned": false, 00:20:02.712 "supported_io_types": { 00:20:02.712 "read": true, 00:20:02.712 "write": true, 00:20:02.712 "unmap": false, 00:20:02.712 "flush": false, 
00:20:02.712 "reset": true, 00:20:02.712 "nvme_admin": false, 00:20:02.712 "nvme_io": false, 00:20:02.712 "nvme_io_md": false, 00:20:02.712 "write_zeroes": true, 00:20:02.712 "zcopy": false, 00:20:02.712 "get_zone_info": false, 00:20:02.712 "zone_management": false, 00:20:02.712 "zone_append": false, 00:20:02.712 "compare": false, 00:20:02.712 "compare_and_write": false, 00:20:02.712 "abort": false, 00:20:02.712 "seek_hole": false, 00:20:02.712 "seek_data": false, 00:20:02.712 "copy": false, 00:20:02.712 "nvme_iov_md": false 00:20:02.712 }, 00:20:02.712 "memory_domains": [ 00:20:02.712 { 00:20:02.712 "dma_device_id": "system", 00:20:02.712 "dma_device_type": 1 00:20:02.712 }, 00:20:02.712 { 00:20:02.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.712 "dma_device_type": 2 00:20:02.712 }, 00:20:02.712 { 00:20:02.712 "dma_device_id": "system", 00:20:02.712 "dma_device_type": 1 00:20:02.712 }, 00:20:02.712 { 00:20:02.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.712 "dma_device_type": 2 00:20:02.712 } 00:20:02.712 ], 00:20:02.712 "driver_specific": { 00:20:02.712 "raid": { 00:20:02.712 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:02.712 "strip_size_kb": 0, 00:20:02.712 "state": "online", 00:20:02.712 "raid_level": "raid1", 00:20:02.712 "superblock": true, 00:20:02.712 "num_base_bdevs": 2, 00:20:02.712 "num_base_bdevs_discovered": 2, 00:20:02.712 "num_base_bdevs_operational": 2, 00:20:02.712 "base_bdevs_list": [ 00:20:02.712 { 00:20:02.712 "name": "pt1", 00:20:02.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.712 "is_configured": true, 00:20:02.712 "data_offset": 256, 00:20:02.712 "data_size": 7936 00:20:02.712 }, 00:20:02.712 { 00:20:02.712 "name": "pt2", 00:20:02.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.712 "is_configured": true, 00:20:02.712 "data_offset": 256, 00:20:02.712 "data_size": 7936 00:20:02.712 } 00:20:02.712 ] 00:20:02.712 } 00:20:02.712 } 00:20:02.712 }' 00:20:02.712 13:54:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:02.972 pt2' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:02.972 13:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:02.972 [2024-10-01 13:54:13.055870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4a63567b-334c-4073-bf80-eff0d3257e6b 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4a63567b-334c-4073-bf80-eff0d3257e6b ']' 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 [2024-10-01 13:54:13.103626] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.972 [2024-10-01 13:54:13.103654] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.972 [2024-10-01 13:54:13.103738] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.972 [2024-10-01 13:54:13.103802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.972 [2024-10-01 13:54:13.103817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.972 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 [2024-10-01 13:54:13.239655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:03.232 [2024-10-01 13:54:13.241929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:03.232 [2024-10-01 13:54:13.242015] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:03.232 [2024-10-01 13:54:13.242069] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:03.232 [2024-10-01 13:54:13.242088] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.232 [2024-10-01 13:54:13.242118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:03.232 request: 00:20:03.232 { 00:20:03.232 "name": "raid_bdev1", 00:20:03.232 "raid_level": "raid1", 00:20:03.232 "base_bdevs": [ 00:20:03.232 "malloc1", 00:20:03.232 "malloc2" 00:20:03.232 ], 00:20:03.232 "superblock": false, 00:20:03.232 "method": "bdev_raid_create", 00:20:03.232 "req_id": 1 00:20:03.232 } 00:20:03.232 Got JSON-RPC error response 00:20:03.232 response: 00:20:03.232 { 00:20:03.232 "code": -17, 00:20:03.232 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:03.232 } 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.232 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.232 [2024-10-01 13:54:13.295610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.232 [2024-10-01 13:54:13.295672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.232 [2024-10-01 13:54:13.295692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:03.232 [2024-10-01 13:54:13.295706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.233 [2024-10-01 13:54:13.298263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.233 [2024-10-01 13:54:13.298306] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.233 [2024-10-01 13:54:13.298410] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:03.233 [2024-10-01 13:54:13.298483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:03.233 pt1 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.233 "name": "raid_bdev1", 00:20:03.233 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:03.233 "strip_size_kb": 0, 00:20:03.233 "state": "configuring", 00:20:03.233 "raid_level": "raid1", 00:20:03.233 "superblock": true, 00:20:03.233 "num_base_bdevs": 2, 00:20:03.233 "num_base_bdevs_discovered": 1, 00:20:03.233 "num_base_bdevs_operational": 2, 00:20:03.233 "base_bdevs_list": [ 00:20:03.233 { 00:20:03.233 "name": "pt1", 00:20:03.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.233 "is_configured": true, 00:20:03.233 "data_offset": 256, 00:20:03.233 "data_size": 7936 00:20:03.233 }, 00:20:03.233 { 00:20:03.233 "name": null, 00:20:03.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.233 "is_configured": false, 00:20:03.233 "data_offset": 256, 00:20:03.233 "data_size": 7936 00:20:03.233 } 00:20:03.233 ] 00:20:03.233 }' 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.233 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.802 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:20:03.802 [2024-10-01 13:54:13.699467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.802 [2024-10-01 13:54:13.699569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.802 [2024-10-01 13:54:13.699594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:03.802 [2024-10-01 13:54:13.699610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.802 [2024-10-01 13:54:13.700121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.803 [2024-10-01 13:54:13.700155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.803 [2024-10-01 13:54:13.700241] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:03.803 [2024-10-01 13:54:13.700271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.803 [2024-10-01 13:54:13.700416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:03.803 [2024-10-01 13:54:13.700431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:03.803 [2024-10-01 13:54:13.700703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:03.803 [2024-10-01 13:54:13.700856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:03.803 [2024-10-01 13:54:13.700866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:03.803 [2024-10-01 13:54:13.701008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.803 pt2 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:03.803 13:54:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.803 "name": "raid_bdev1", 00:20:03.803 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:03.803 
"strip_size_kb": 0, 00:20:03.803 "state": "online", 00:20:03.803 "raid_level": "raid1", 00:20:03.803 "superblock": true, 00:20:03.803 "num_base_bdevs": 2, 00:20:03.803 "num_base_bdevs_discovered": 2, 00:20:03.803 "num_base_bdevs_operational": 2, 00:20:03.803 "base_bdevs_list": [ 00:20:03.803 { 00:20:03.803 "name": "pt1", 00:20:03.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.803 "is_configured": true, 00:20:03.803 "data_offset": 256, 00:20:03.803 "data_size": 7936 00:20:03.803 }, 00:20:03.803 { 00:20:03.803 "name": "pt2", 00:20:03.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.803 "is_configured": true, 00:20:03.803 "data_offset": 256, 00:20:03.803 "data_size": 7936 00:20:03.803 } 00:20:03.803 ] 00:20:03.803 }' 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.803 13:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.062 13:54:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:04.062 [2024-10-01 13:54:14.175048] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.062 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.062 "name": "raid_bdev1", 00:20:04.062 "aliases": [ 00:20:04.062 "4a63567b-334c-4073-bf80-eff0d3257e6b" 00:20:04.062 ], 00:20:04.062 "product_name": "Raid Volume", 00:20:04.063 "block_size": 4096, 00:20:04.063 "num_blocks": 7936, 00:20:04.063 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:04.063 "assigned_rate_limits": { 00:20:04.063 "rw_ios_per_sec": 0, 00:20:04.063 "rw_mbytes_per_sec": 0, 00:20:04.063 "r_mbytes_per_sec": 0, 00:20:04.063 "w_mbytes_per_sec": 0 00:20:04.063 }, 00:20:04.063 "claimed": false, 00:20:04.063 "zoned": false, 00:20:04.063 "supported_io_types": { 00:20:04.063 "read": true, 00:20:04.063 "write": true, 00:20:04.063 "unmap": false, 00:20:04.063 "flush": false, 00:20:04.063 "reset": true, 00:20:04.063 "nvme_admin": false, 00:20:04.063 "nvme_io": false, 00:20:04.063 "nvme_io_md": false, 00:20:04.063 "write_zeroes": true, 00:20:04.063 "zcopy": false, 00:20:04.063 "get_zone_info": false, 00:20:04.063 "zone_management": false, 00:20:04.063 "zone_append": false, 00:20:04.063 "compare": false, 00:20:04.063 "compare_and_write": false, 00:20:04.063 "abort": false, 00:20:04.063 "seek_hole": false, 00:20:04.063 "seek_data": false, 00:20:04.063 "copy": false, 00:20:04.063 "nvme_iov_md": false 00:20:04.063 }, 00:20:04.063 "memory_domains": [ 00:20:04.063 { 00:20:04.063 "dma_device_id": "system", 00:20:04.063 "dma_device_type": 1 00:20:04.063 }, 00:20:04.063 { 00:20:04.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.063 "dma_device_type": 2 00:20:04.063 }, 00:20:04.063 { 00:20:04.063 "dma_device_id": "system", 00:20:04.063 
"dma_device_type": 1 00:20:04.063 }, 00:20:04.063 { 00:20:04.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.063 "dma_device_type": 2 00:20:04.063 } 00:20:04.063 ], 00:20:04.063 "driver_specific": { 00:20:04.063 "raid": { 00:20:04.063 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:04.063 "strip_size_kb": 0, 00:20:04.063 "state": "online", 00:20:04.063 "raid_level": "raid1", 00:20:04.063 "superblock": true, 00:20:04.063 "num_base_bdevs": 2, 00:20:04.063 "num_base_bdevs_discovered": 2, 00:20:04.063 "num_base_bdevs_operational": 2, 00:20:04.063 "base_bdevs_list": [ 00:20:04.063 { 00:20:04.063 "name": "pt1", 00:20:04.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.063 "is_configured": true, 00:20:04.063 "data_offset": 256, 00:20:04.063 "data_size": 7936 00:20:04.063 }, 00:20:04.063 { 00:20:04.063 "name": "pt2", 00:20:04.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.063 "is_configured": true, 00:20:04.063 "data_offset": 256, 00:20:04.063 "data_size": 7936 00:20:04.063 } 00:20:04.063 ] 00:20:04.063 } 00:20:04.063 } 00:20:04.063 }' 00:20:04.063 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:04.322 pt2' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:04.322 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 [2024-10-01 13:54:14.410751] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4a63567b-334c-4073-bf80-eff0d3257e6b '!=' 4a63567b-334c-4073-bf80-eff0d3257e6b ']' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 [2024-10-01 13:54:14.454492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.323 "name": "raid_bdev1", 00:20:04.323 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:04.323 "strip_size_kb": 0, 00:20:04.323 "state": "online", 00:20:04.323 "raid_level": "raid1", 00:20:04.323 "superblock": true, 00:20:04.323 "num_base_bdevs": 2, 00:20:04.323 "num_base_bdevs_discovered": 1, 00:20:04.323 "num_base_bdevs_operational": 1, 00:20:04.323 "base_bdevs_list": [ 00:20:04.323 { 00:20:04.323 "name": null, 00:20:04.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.323 "is_configured": false, 00:20:04.323 "data_offset": 0, 00:20:04.323 "data_size": 7936 00:20:04.323 }, 00:20:04.323 { 00:20:04.323 "name": "pt2", 00:20:04.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.323 "is_configured": true, 00:20:04.323 "data_offset": 256, 00:20:04.323 "data_size": 7936 00:20:04.323 } 00:20:04.323 ] 00:20:04.323 }' 00:20:04.323 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.323 13:54:14 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.901 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:04.901 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.901 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 [2024-10-01 13:54:14.905815] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.902 [2024-10-01 13:54:14.905852] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.902 [2024-10-01 13:54:14.905941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.902 [2024-10-01 13:54:14.906007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.902 [2024-10-01 13:54:14.906021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 [2024-10-01 13:54:14.965729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:04.902 [2024-10-01 13:54:14.965796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.902 [2024-10-01 13:54:14.965816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:04.902 [2024-10-01 13:54:14.965831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.902 [2024-10-01 13:54:14.968374] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.902 [2024-10-01 13:54:14.968432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:04.902 [2024-10-01 13:54:14.968523] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:04.902 [2024-10-01 13:54:14.968590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.902 [2024-10-01 13:54:14.968714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:04.902 [2024-10-01 13:54:14.968729] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:04.902 [2024-10-01 13:54:14.968995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:04.902 [2024-10-01 13:54:14.969154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:04.902 [2024-10-01 13:54:14.969165] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:04.902 [2024-10-01 13:54:14.969308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.902 pt2 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 13:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.902 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.902 "name": "raid_bdev1", 00:20:04.902 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:04.902 "strip_size_kb": 0, 00:20:04.902 "state": "online", 00:20:04.902 "raid_level": "raid1", 00:20:04.902 "superblock": true, 00:20:04.902 "num_base_bdevs": 2, 00:20:04.902 "num_base_bdevs_discovered": 1, 00:20:04.902 "num_base_bdevs_operational": 1, 00:20:04.902 "base_bdevs_list": [ 00:20:04.902 { 00:20:04.902 "name": null, 00:20:04.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.902 "is_configured": false, 00:20:04.902 "data_offset": 256, 00:20:04.902 "data_size": 7936 00:20:04.902 }, 00:20:04.902 { 00:20:04.902 "name": "pt2", 00:20:04.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.902 "is_configured": true, 00:20:04.902 "data_offset": 256, 00:20:04.902 "data_size": 7936 00:20:04.902 } 00:20:04.902 ] 00:20:04.902 }' 
00:20:04.902 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.902 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.471 [2024-10-01 13:54:15.441093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.471 [2024-10-01 13:54:15.441132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.471 [2024-10-01 13:54:15.441210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.471 [2024-10-01 13:54:15.441263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.471 [2024-10-01 13:54:15.441275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.471 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.471 [2024-10-01 13:54:15.497036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:05.471 [2024-10-01 13:54:15.497104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.471 [2024-10-01 13:54:15.497127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:05.471 [2024-10-01 13:54:15.497139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.471 [2024-10-01 13:54:15.499787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.472 [2024-10-01 13:54:15.499831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:05.472 [2024-10-01 13:54:15.499928] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:05.472 [2024-10-01 13:54:15.499979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:05.472 [2024-10-01 13:54:15.500127] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:05.472 [2024-10-01 13:54:15.500148] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.472 [2024-10-01 13:54:15.500170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:05.472 [2024-10-01 13:54:15.500253] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:05.472 [2024-10-01 13:54:15.500334] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:05.472 [2024-10-01 13:54:15.500345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:05.472 [2024-10-01 13:54:15.500620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:05.472 [2024-10-01 13:54:15.500772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:05.472 [2024-10-01 13:54:15.500793] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:05.472 [2024-10-01 13:54:15.501016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.472 pt1 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.472 "name": "raid_bdev1", 00:20:05.472 "uuid": "4a63567b-334c-4073-bf80-eff0d3257e6b", 00:20:05.472 "strip_size_kb": 0, 00:20:05.472 "state": "online", 00:20:05.472 "raid_level": "raid1", 00:20:05.472 "superblock": true, 00:20:05.472 "num_base_bdevs": 2, 00:20:05.472 "num_base_bdevs_discovered": 1, 00:20:05.472 "num_base_bdevs_operational": 1, 00:20:05.472 "base_bdevs_list": [ 00:20:05.472 { 00:20:05.472 "name": null, 00:20:05.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.472 "is_configured": false, 00:20:05.472 "data_offset": 256, 00:20:05.472 "data_size": 7936 00:20:05.472 }, 00:20:05.472 { 00:20:05.472 "name": "pt2", 00:20:05.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.472 "is_configured": true, 00:20:05.472 "data_offset": 256, 00:20:05.472 "data_size": 7936 00:20:05.472 } 00:20:05.472 ] 00:20:05.472 }' 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.472 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:06.041 13:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 [2024-10-01 13:54:15.980661] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4a63567b-334c-4073-bf80-eff0d3257e6b '!=' 4a63567b-334c-4073-bf80-eff0d3257e6b ']' 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86249 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86249 ']' 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86249 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86249 00:20:06.041 killing process with pid 86249 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86249' 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86249 00:20:06.041 [2024-10-01 13:54:16.051807] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.041 [2024-10-01 13:54:16.051908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.041 13:54:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86249 00:20:06.041 [2024-10-01 13:54:16.051959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.041 [2024-10-01 13:54:16.051978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:06.300 [2024-10-01 13:54:16.271486] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.680 13:54:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:07.680 00:20:07.680 real 0m6.325s 00:20:07.680 user 0m9.379s 00:20:07.680 sys 0m1.291s 00:20:07.680 13:54:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.680 13:54:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 ************************************ 00:20:07.680 END TEST raid_superblock_test_4k 00:20:07.680 ************************************ 00:20:07.680 13:54:17 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:20:07.680 13:54:17 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:07.680 13:54:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:07.680 13:54:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.680 13:54:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 ************************************ 00:20:07.680 START TEST raid_rebuild_test_sb_4k 00:20:07.680 ************************************ 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86572 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86572 00:20:07.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86572 ']' 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.680 13:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 [2024-10-01 13:54:17.817414] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:07.680 [2024-10-01 13:54:17.817783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86572 ] 00:20:07.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.680 Zero copy mechanism will not be used.
00:20:07.939 [2024-10-01 13:54:18.006460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.197 [2024-10-01 13:54:18.229174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.464 [2024-10-01 13:54:18.452021] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.464 [2024-10-01 13:54:18.452091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
*NOTICE*: io_device created at: 0x0x616000007280 00:20:08.723 [2024-10-01 13:54:18.883089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.723 [2024-10-01 13:54:18.885475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.723 [2024-10-01 13:54:18.885649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:08.723 BaseBdev1 00:20:08.723 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.723 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:08.723 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:08.724 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.724 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 BaseBdev2_malloc 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 [2024-10-01 13:54:18.949277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:08.984 [2024-10-01 13:54:18.949486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.984 [2024-10-01 13:54:18.949518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:08.984 [2024-10-01 13:54:18.949534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:08.984 [2024-10-01 13:54:18.951905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.984 [2024-10-01 13:54:18.951948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:08.984 BaseBdev2 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 spare_malloc 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.984 13:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 spare_delay 00:20:08.984 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.985 [2024-10-01 13:54:19.014387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:08.985 [2024-10-01 13:54:19.014472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.985 [2024-10-01 13:54:19.014497] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:08.985 [2024-10-01 13:54:19.014512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.985 [2024-10-01 13:54:19.016940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.985 [2024-10-01 13:54:19.017107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:08.985 spare 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.985 [2024-10-01 13:54:19.026413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.985 [2024-10-01 13:54:19.028496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.985 [2024-10-01 13:54:19.028672] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:08.985 [2024-10-01 13:54:19.028689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:08.985 [2024-10-01 13:54:19.028981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:08.985 [2024-10-01 13:54:19.029147] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:08.985 [2024-10-01 13:54:19.029157] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:08.985 [2024-10-01 13:54:19.029333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.985 
13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.985 "name": "raid_bdev1", 00:20:08.985 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 
00:20:08.985 "strip_size_kb": 0, 00:20:08.985 "state": "online", 00:20:08.985 "raid_level": "raid1", 00:20:08.985 "superblock": true, 00:20:08.985 "num_base_bdevs": 2, 00:20:08.985 "num_base_bdevs_discovered": 2, 00:20:08.985 "num_base_bdevs_operational": 2, 00:20:08.985 "base_bdevs_list": [ 00:20:08.985 { 00:20:08.985 "name": "BaseBdev1", 00:20:08.985 "uuid": "07234338-401c-5f2c-b7c4-e1d3b4794157", 00:20:08.985 "is_configured": true, 00:20:08.985 "data_offset": 256, 00:20:08.985 "data_size": 7936 00:20:08.985 }, 00:20:08.985 { 00:20:08.985 "name": "BaseBdev2", 00:20:08.985 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:08.985 "is_configured": true, 00:20:08.985 "data_offset": 256, 00:20:08.985 "data_size": 7936 00:20:08.985 } 00:20:08.985 ] 00:20:08.985 }' 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.985 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:09.553 [2024-10-01 13:54:19.450055] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.553 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:09.812 [2024-10-01 13:54:19.781386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:20:09.812 /dev/nbd0 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.812 1+0 records in 00:20:09.812 1+0 records out 00:20:09.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428438 s, 9.6 MB/s 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:09.812 13:54:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:09.812 13:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:10.775 7936+0 records in 00:20:10.775 7936+0 records out 00:20:10.775 32505856 bytes (33 MB, 31 MiB) copied, 0.768675 s, 42.3 MB/s 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.775 [2024-10-01 13:54:20.900888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.775 [2024-10-01 13:54:20.912975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.775 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.034 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.034 "name": "raid_bdev1", 00:20:11.034 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:11.034 "strip_size_kb": 0, 00:20:11.034 "state": "online", 00:20:11.034 "raid_level": "raid1", 00:20:11.034 "superblock": true, 00:20:11.034 "num_base_bdevs": 2, 00:20:11.034 "num_base_bdevs_discovered": 1, 00:20:11.034 "num_base_bdevs_operational": 1, 00:20:11.034 "base_bdevs_list": [ 00:20:11.034 { 00:20:11.034 "name": null, 00:20:11.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.034 "is_configured": false, 00:20:11.034 "data_offset": 0, 00:20:11.034 "data_size": 7936 00:20:11.034 }, 00:20:11.034 { 00:20:11.034 "name": "BaseBdev2", 00:20:11.034 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:11.034 "is_configured": true, 00:20:11.034 "data_offset": 256, 00:20:11.034 "data_size": 7936 00:20:11.034 } 00:20:11.034 ] 00:20:11.034 }' 00:20:11.034 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.034 13:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.292 13:54:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.292 13:54:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.292 13:54:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.292 [2024-10-01 13:54:21.304473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.292 [2024-10-01 13:54:21.318650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:11.292 13:54:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.292 13:54:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:11.292 [2024-10-01 13:54:21.320864] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.228 13:54:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.228 "name": "raid_bdev1", 00:20:12.228 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:12.228 "strip_size_kb": 0, 00:20:12.228 "state": "online", 00:20:12.228 "raid_level": "raid1", 00:20:12.228 "superblock": true, 00:20:12.228 "num_base_bdevs": 2, 00:20:12.228 "num_base_bdevs_discovered": 2, 00:20:12.228 "num_base_bdevs_operational": 2, 00:20:12.228 "process": { 00:20:12.228 "type": "rebuild", 00:20:12.228 "target": "spare", 00:20:12.228 "progress": { 00:20:12.228 "blocks": 2560, 00:20:12.228 "percent": 32 00:20:12.228 } 00:20:12.228 }, 00:20:12.228 "base_bdevs_list": [ 00:20:12.228 { 00:20:12.228 "name": "spare", 00:20:12.228 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:12.228 "is_configured": true, 00:20:12.228 "data_offset": 256, 00:20:12.228 "data_size": 7936 00:20:12.228 }, 00:20:12.228 { 00:20:12.228 "name": "BaseBdev2", 00:20:12.228 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:12.228 "is_configured": true, 00:20:12.228 "data_offset": 256, 00:20:12.228 "data_size": 7936 00:20:12.228 } 00:20:12.228 ] 00:20:12.228 }' 00:20:12.228 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.487 13:54:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.487 [2024-10-01 13:54:22.468812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.487 [2024-10-01 13:54:22.527083] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:12.487 [2024-10-01 13:54:22.527203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.487 [2024-10-01 13:54:22.527221] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.487 [2024-10-01 13:54:22.527233] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.487 13:54:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.487 "name": "raid_bdev1", 00:20:12.487 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:12.487 "strip_size_kb": 0, 00:20:12.487 "state": "online", 00:20:12.487 "raid_level": "raid1", 00:20:12.487 "superblock": true, 00:20:12.487 "num_base_bdevs": 2, 00:20:12.487 "num_base_bdevs_discovered": 1, 00:20:12.487 "num_base_bdevs_operational": 1, 00:20:12.487 "base_bdevs_list": [ 00:20:12.487 { 00:20:12.487 "name": null, 00:20:12.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.487 "is_configured": false, 00:20:12.487 "data_offset": 0, 00:20:12.487 "data_size": 7936 00:20:12.487 }, 00:20:12.487 { 00:20:12.487 "name": "BaseBdev2", 00:20:12.487 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:12.487 "is_configured": true, 00:20:12.487 "data_offset": 256, 00:20:12.487 "data_size": 7936 00:20:12.487 } 00:20:12.487 ] 00:20:12.487 }' 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.487 13:54:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.057 13:54:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.057 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.057 "name": "raid_bdev1", 00:20:13.057 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:13.057 "strip_size_kb": 0, 00:20:13.057 "state": "online", 00:20:13.057 "raid_level": "raid1", 00:20:13.057 "superblock": true, 00:20:13.057 "num_base_bdevs": 2, 00:20:13.057 "num_base_bdevs_discovered": 1, 00:20:13.057 "num_base_bdevs_operational": 1, 00:20:13.057 "base_bdevs_list": [ 00:20:13.057 { 00:20:13.057 "name": null, 00:20:13.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.057 "is_configured": false, 00:20:13.057 "data_offset": 0, 00:20:13.057 "data_size": 7936 00:20:13.057 }, 00:20:13.057 { 00:20:13.057 "name": "BaseBdev2", 00:20:13.057 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:13.058 "is_configured": true, 00:20:13.058 "data_offset": 256, 00:20:13.058 "data_size": 7936 00:20:13.058 } 00:20:13.058 ] 00:20:13.058 }' 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.058 13:54:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.058 [2024-10-01 13:54:23.184780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.058 [2024-10-01 13:54:23.199861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.058 13:54:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:13.058 [2024-10-01 13:54:23.201988] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.435 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.435 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.436 "name": "raid_bdev1", 00:20:14.436 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:14.436 "strip_size_kb": 0, 00:20:14.436 "state": "online", 00:20:14.436 "raid_level": "raid1", 00:20:14.436 "superblock": true, 00:20:14.436 "num_base_bdevs": 2, 00:20:14.436 "num_base_bdevs_discovered": 2, 00:20:14.436 "num_base_bdevs_operational": 2, 00:20:14.436 "process": { 00:20:14.436 "type": "rebuild", 00:20:14.436 "target": "spare", 00:20:14.436 "progress": { 00:20:14.436 "blocks": 2560, 00:20:14.436 "percent": 32 00:20:14.436 } 00:20:14.436 }, 00:20:14.436 "base_bdevs_list": [ 00:20:14.436 { 00:20:14.436 "name": "spare", 00:20:14.436 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:14.436 "is_configured": true, 00:20:14.436 "data_offset": 256, 00:20:14.436 "data_size": 7936 00:20:14.436 }, 00:20:14.436 { 00:20:14.436 "name": "BaseBdev2", 00:20:14.436 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:14.436 "is_configured": true, 00:20:14.436 "data_offset": 256, 00:20:14.436 "data_size": 7936 00:20:14.436 } 00:20:14.436 ] 00:20:14.436 }' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:14.436 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=699 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.436 13:54:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.436 "name": "raid_bdev1", 00:20:14.436 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:14.436 "strip_size_kb": 0, 00:20:14.436 "state": "online", 00:20:14.436 "raid_level": "raid1", 00:20:14.436 "superblock": true, 00:20:14.436 "num_base_bdevs": 2, 00:20:14.436 "num_base_bdevs_discovered": 2, 00:20:14.436 "num_base_bdevs_operational": 2, 00:20:14.436 "process": { 00:20:14.436 "type": "rebuild", 00:20:14.436 "target": "spare", 00:20:14.436 "progress": { 00:20:14.436 "blocks": 2816, 00:20:14.436 "percent": 35 00:20:14.436 } 00:20:14.436 }, 00:20:14.436 "base_bdevs_list": [ 00:20:14.436 { 00:20:14.436 "name": "spare", 00:20:14.436 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:14.436 "is_configured": true, 00:20:14.436 "data_offset": 256, 00:20:14.436 "data_size": 7936 00:20:14.436 }, 00:20:14.436 { 00:20:14.436 "name": "BaseBdev2", 00:20:14.436 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:14.436 "is_configured": true, 00:20:14.436 "data_offset": 256, 00:20:14.436 "data_size": 7936 00:20:14.436 } 00:20:14.436 ] 00:20:14.436 }' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.436 13:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:15.373 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.374 "name": "raid_bdev1", 00:20:15.374 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:15.374 "strip_size_kb": 0, 00:20:15.374 "state": "online", 00:20:15.374 "raid_level": "raid1", 00:20:15.374 "superblock": true, 00:20:15.374 "num_base_bdevs": 2, 00:20:15.374 "num_base_bdevs_discovered": 2, 00:20:15.374 "num_base_bdevs_operational": 2, 00:20:15.374 "process": { 00:20:15.374 "type": "rebuild", 00:20:15.374 "target": "spare", 00:20:15.374 "progress": { 00:20:15.374 "blocks": 5888, 00:20:15.374 "percent": 74 00:20:15.374 } 00:20:15.374 }, 00:20:15.374 "base_bdevs_list": [ 00:20:15.374 { 00:20:15.374 "name": "spare", 00:20:15.374 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:15.374 "is_configured": true, 00:20:15.374 "data_offset": 256, 00:20:15.374 "data_size": 7936 00:20:15.374 
}, 00:20:15.374 { 00:20:15.374 "name": "BaseBdev2", 00:20:15.374 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:15.374 "is_configured": true, 00:20:15.374 "data_offset": 256, 00:20:15.374 "data_size": 7936 00:20:15.374 } 00:20:15.374 ] 00:20:15.374 }' 00:20:15.374 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.631 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.631 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.631 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.631 13:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.234 [2024-10-01 13:54:26.319598] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:16.234 [2024-10-01 13:54:26.319701] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:16.234 [2024-10-01 13:54:26.319832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.491 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.750 "name": "raid_bdev1", 00:20:16.750 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:16.750 "strip_size_kb": 0, 00:20:16.750 "state": "online", 00:20:16.750 "raid_level": "raid1", 00:20:16.750 "superblock": true, 00:20:16.750 "num_base_bdevs": 2, 00:20:16.750 "num_base_bdevs_discovered": 2, 00:20:16.750 "num_base_bdevs_operational": 2, 00:20:16.750 "base_bdevs_list": [ 00:20:16.750 { 00:20:16.750 "name": "spare", 00:20:16.750 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:16.750 "is_configured": true, 00:20:16.750 "data_offset": 256, 00:20:16.750 "data_size": 7936 00:20:16.750 }, 00:20:16.750 { 00:20:16.750 "name": "BaseBdev2", 00:20:16.750 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:16.750 "is_configured": true, 00:20:16.750 "data_offset": 256, 00:20:16.750 "data_size": 7936 00:20:16.750 } 00:20:16.750 ] 00:20:16.750 }' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.750 "name": "raid_bdev1", 00:20:16.750 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:16.750 "strip_size_kb": 0, 00:20:16.750 "state": "online", 00:20:16.750 "raid_level": "raid1", 00:20:16.750 "superblock": true, 00:20:16.750 "num_base_bdevs": 2, 00:20:16.750 "num_base_bdevs_discovered": 2, 00:20:16.750 "num_base_bdevs_operational": 2, 00:20:16.750 "base_bdevs_list": [ 00:20:16.750 { 00:20:16.750 "name": "spare", 00:20:16.750 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:16.750 "is_configured": true, 00:20:16.750 "data_offset": 256, 00:20:16.750 "data_size": 7936 00:20:16.750 }, 00:20:16.750 { 00:20:16.750 "name": "BaseBdev2", 00:20:16.750 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:16.750 "is_configured": true, 
00:20:16.750 "data_offset": 256, 00:20:16.750 "data_size": 7936 00:20:16.750 } 00:20:16.750 ] 00:20:16.750 }' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.750 13:54:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.750 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.008 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.008 "name": "raid_bdev1", 00:20:17.008 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:17.008 "strip_size_kb": 0, 00:20:17.008 "state": "online", 00:20:17.008 "raid_level": "raid1", 00:20:17.008 "superblock": true, 00:20:17.008 "num_base_bdevs": 2, 00:20:17.008 "num_base_bdevs_discovered": 2, 00:20:17.008 "num_base_bdevs_operational": 2, 00:20:17.008 "base_bdevs_list": [ 00:20:17.008 { 00:20:17.008 "name": "spare", 00:20:17.008 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:17.008 "is_configured": true, 00:20:17.008 "data_offset": 256, 00:20:17.008 "data_size": 7936 00:20:17.008 }, 00:20:17.008 { 00:20:17.008 "name": "BaseBdev2", 00:20:17.008 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:17.008 "is_configured": true, 00:20:17.008 "data_offset": 256, 00:20:17.008 "data_size": 7936 00:20:17.008 } 00:20:17.008 ] 00:20:17.008 }' 00:20:17.008 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.008 13:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.265 [2024-10-01 13:54:27.336100] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.265 [2024-10-01 13:54:27.336285] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:20:17.265 [2024-10-01 13:54:27.336416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.265 [2024-10-01 13:54:27.336492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.265 [2024-10-01 13:54:27.336505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.265 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:17.522 /dev/nbd0 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.522 1+0 records in 00:20:17.522 1+0 records out 00:20:17.522 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388663 s, 10.5 MB/s 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.522 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:17.779 /dev/nbd1 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:17.779 13:54:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.779 1+0 records in 00:20:17.779 1+0 records out 00:20:17.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476957 s, 8.6 MB/s 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:17.779 13:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.037 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.293 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.552 13:54:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.552 [2024-10-01 13:54:28.598379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:18.552 [2024-10-01 13:54:28.598460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.552 [2024-10-01 13:54:28.598505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:18.552 [2024-10-01 13:54:28.598519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.552 [2024-10-01 13:54:28.601254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.552 [2024-10-01 13:54:28.601444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:18.552 [2024-10-01 13:54:28.601568] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:20:18.552 [2024-10-01 13:54:28.601639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.552 [2024-10-01 13:54:28.601821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.552 spare 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.552 [2024-10-01 13:54:28.701761] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:18.552 [2024-10-01 13:54:28.701827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:18.552 [2024-10-01 13:54:28.702191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:18.552 [2024-10-01 13:54:28.702429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:18.552 [2024-10-01 13:54:28.702444] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:18.552 [2024-10-01 13:54:28.702685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.552 
13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.552 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.810 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.810 "name": "raid_bdev1", 00:20:18.810 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:18.810 "strip_size_kb": 0, 00:20:18.810 "state": "online", 00:20:18.810 "raid_level": "raid1", 00:20:18.810 "superblock": true, 00:20:18.810 "num_base_bdevs": 2, 00:20:18.810 "num_base_bdevs_discovered": 2, 00:20:18.810 "num_base_bdevs_operational": 2, 00:20:18.810 "base_bdevs_list": [ 00:20:18.810 { 00:20:18.810 "name": "spare", 00:20:18.810 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:18.810 "is_configured": true, 00:20:18.810 "data_offset": 256, 00:20:18.810 
"data_size": 7936 00:20:18.810 }, 00:20:18.810 { 00:20:18.810 "name": "BaseBdev2", 00:20:18.810 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:18.810 "is_configured": true, 00:20:18.810 "data_offset": 256, 00:20:18.810 "data_size": 7936 00:20:18.810 } 00:20:18.810 ] 00:20:18.810 }' 00:20:18.810 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.810 13:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.067 "name": "raid_bdev1", 00:20:19.067 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:19.067 "strip_size_kb": 0, 00:20:19.067 "state": "online", 00:20:19.067 "raid_level": "raid1", 00:20:19.067 "superblock": true, 00:20:19.067 "num_base_bdevs": 2, 
00:20:19.067 "num_base_bdevs_discovered": 2, 00:20:19.067 "num_base_bdevs_operational": 2, 00:20:19.067 "base_bdevs_list": [ 00:20:19.067 { 00:20:19.067 "name": "spare", 00:20:19.067 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:19.067 "is_configured": true, 00:20:19.067 "data_offset": 256, 00:20:19.067 "data_size": 7936 00:20:19.067 }, 00:20:19.067 { 00:20:19.067 "name": "BaseBdev2", 00:20:19.067 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:19.067 "is_configured": true, 00:20:19.067 "data_offset": 256, 00:20:19.067 "data_size": 7936 00:20:19.067 } 00:20:19.067 ] 00:20:19.067 }' 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.067 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.324 13:54:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 [2024-10-01 13:54:29.353941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.324 
13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.324 "name": "raid_bdev1", 00:20:19.324 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:19.324 "strip_size_kb": 0, 00:20:19.324 "state": "online", 00:20:19.324 "raid_level": "raid1", 00:20:19.324 "superblock": true, 00:20:19.324 "num_base_bdevs": 2, 00:20:19.324 "num_base_bdevs_discovered": 1, 00:20:19.324 "num_base_bdevs_operational": 1, 00:20:19.324 "base_bdevs_list": [ 00:20:19.324 { 00:20:19.324 "name": null, 00:20:19.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.324 "is_configured": false, 00:20:19.324 "data_offset": 0, 00:20:19.324 "data_size": 7936 00:20:19.324 }, 00:20:19.324 { 00:20:19.324 "name": "BaseBdev2", 00:20:19.324 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:19.324 "is_configured": true, 00:20:19.324 "data_offset": 256, 00:20:19.324 "data_size": 7936 00:20:19.324 } 00:20:19.324 ] 00:20:19.324 }' 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.324 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.891 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.891 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 [2024-10-01 13:54:29.809309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.891 [2024-10-01 13:54:29.809539] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.891 [2024-10-01 13:54:29.809564] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:19.891 [2024-10-01 13:54:29.809638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.891 [2024-10-01 13:54:29.826345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:19.891 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.891 13:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:19.891 [2024-10-01 13:54:29.828676] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.827 "name": "raid_bdev1", 00:20:20.827 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:20.827 "strip_size_kb": 0, 00:20:20.827 "state": "online", 
00:20:20.827 "raid_level": "raid1", 00:20:20.827 "superblock": true, 00:20:20.827 "num_base_bdevs": 2, 00:20:20.827 "num_base_bdevs_discovered": 2, 00:20:20.827 "num_base_bdevs_operational": 2, 00:20:20.827 "process": { 00:20:20.827 "type": "rebuild", 00:20:20.827 "target": "spare", 00:20:20.827 "progress": { 00:20:20.827 "blocks": 2560, 00:20:20.827 "percent": 32 00:20:20.827 } 00:20:20.827 }, 00:20:20.827 "base_bdevs_list": [ 00:20:20.827 { 00:20:20.827 "name": "spare", 00:20:20.827 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:20.827 "is_configured": true, 00:20:20.827 "data_offset": 256, 00:20:20.827 "data_size": 7936 00:20:20.827 }, 00:20:20.827 { 00:20:20.827 "name": "BaseBdev2", 00:20:20.827 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:20.827 "is_configured": true, 00:20:20.827 "data_offset": 256, 00:20:20.827 "data_size": 7936 00:20:20.827 } 00:20:20.827 ] 00:20:20.827 }' 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.827 13:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.827 [2024-10-01 13:54:30.995084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.086 [2024-10-01 13:54:31.035567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.086 [2024-10-01 
13:54:31.035664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.086 [2024-10-01 13:54:31.035682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.086 [2024-10-01 13:54:31.035695] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.086 "name": "raid_bdev1", 00:20:21.086 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:21.086 "strip_size_kb": 0, 00:20:21.086 "state": "online", 00:20:21.086 "raid_level": "raid1", 00:20:21.086 "superblock": true, 00:20:21.086 "num_base_bdevs": 2, 00:20:21.086 "num_base_bdevs_discovered": 1, 00:20:21.086 "num_base_bdevs_operational": 1, 00:20:21.086 "base_bdevs_list": [ 00:20:21.086 { 00:20:21.086 "name": null, 00:20:21.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.086 "is_configured": false, 00:20:21.086 "data_offset": 0, 00:20:21.086 "data_size": 7936 00:20:21.086 }, 00:20:21.086 { 00:20:21.086 "name": "BaseBdev2", 00:20:21.086 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:21.086 "is_configured": true, 00:20:21.086 "data_offset": 256, 00:20:21.086 "data_size": 7936 00:20:21.086 } 00:20:21.086 ] 00:20:21.086 }' 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.086 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.652 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.652 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.652 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.652 [2024-10-01 13:54:31.641765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.652 [2024-10-01 13:54:31.641877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.652 [2024-10-01 13:54:31.641903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:20:21.652 [2024-10-01 13:54:31.641918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.652 [2024-10-01 13:54:31.642525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.652 [2024-10-01 13:54:31.642569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.652 [2024-10-01 13:54:31.642675] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.652 [2024-10-01 13:54:31.642693] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.652 [2024-10-01 13:54:31.642707] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:21.652 [2024-10-01 13:54:31.642733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.652 [2024-10-01 13:54:31.659176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:21.652 spare 00:20:21.652 [2024-10-01 13:54:31.661526] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.652 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.652 13:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.586 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.586 "name": "raid_bdev1", 00:20:22.586 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:22.586 "strip_size_kb": 0, 00:20:22.586 "state": "online", 00:20:22.586 "raid_level": "raid1", 00:20:22.586 "superblock": true, 00:20:22.586 "num_base_bdevs": 2, 00:20:22.586 "num_base_bdevs_discovered": 2, 00:20:22.586 "num_base_bdevs_operational": 2, 00:20:22.586 "process": { 00:20:22.586 "type": "rebuild", 00:20:22.586 "target": "spare", 00:20:22.586 "progress": { 00:20:22.586 "blocks": 2560, 00:20:22.586 "percent": 32 00:20:22.586 } 00:20:22.586 }, 00:20:22.586 "base_bdevs_list": [ 00:20:22.586 { 00:20:22.586 "name": "spare", 00:20:22.586 "uuid": "b92b8f37-53c2-58ee-91d2-acb63faa82bf", 00:20:22.586 "is_configured": true, 00:20:22.586 "data_offset": 256, 00:20:22.586 "data_size": 7936 00:20:22.586 }, 00:20:22.586 { 00:20:22.586 "name": "BaseBdev2", 00:20:22.586 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:22.586 "is_configured": true, 00:20:22.587 "data_offset": 256, 00:20:22.587 "data_size": 7936 00:20:22.587 } 00:20:22.587 ] 00:20:22.587 }' 00:20:22.587 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.587 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:22.587 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.844 [2024-10-01 13:54:32.792746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.844 [2024-10-01 13:54:32.869123] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:22.844 [2024-10-01 13:54:32.869244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.844 [2024-10-01 13:54:32.869267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.844 [2024-10-01 13:54:32.869276] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.844 "name": "raid_bdev1", 00:20:22.844 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:22.844 "strip_size_kb": 0, 00:20:22.844 "state": "online", 00:20:22.844 "raid_level": "raid1", 00:20:22.844 "superblock": true, 00:20:22.844 "num_base_bdevs": 2, 00:20:22.844 "num_base_bdevs_discovered": 1, 00:20:22.844 "num_base_bdevs_operational": 1, 00:20:22.844 "base_bdevs_list": [ 00:20:22.844 { 00:20:22.844 "name": null, 00:20:22.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.844 "is_configured": false, 00:20:22.844 "data_offset": 0, 00:20:22.844 "data_size": 7936 00:20:22.844 }, 00:20:22.844 { 00:20:22.844 "name": "BaseBdev2", 00:20:22.844 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:22.844 "is_configured": true, 00:20:22.844 "data_offset": 256, 00:20:22.844 "data_size": 7936 00:20:22.844 } 00:20:22.844 ] 00:20:22.844 }' 
00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.844 13:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.408 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.409 "name": "raid_bdev1", 00:20:23.409 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:23.409 "strip_size_kb": 0, 00:20:23.409 "state": "online", 00:20:23.409 "raid_level": "raid1", 00:20:23.409 "superblock": true, 00:20:23.409 "num_base_bdevs": 2, 00:20:23.409 "num_base_bdevs_discovered": 1, 00:20:23.409 "num_base_bdevs_operational": 1, 00:20:23.409 "base_bdevs_list": [ 00:20:23.409 { 00:20:23.409 "name": null, 00:20:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.409 "is_configured": false, 00:20:23.409 "data_offset": 0, 
00:20:23.409 "data_size": 7936 00:20:23.409 }, 00:20:23.409 { 00:20:23.409 "name": "BaseBdev2", 00:20:23.409 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:23.409 "is_configured": true, 00:20:23.409 "data_offset": 256, 00:20:23.409 "data_size": 7936 00:20:23.409 } 00:20:23.409 ] 00:20:23.409 }' 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 [2024-10-01 13:54:33.495602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:23.409 [2024-10-01 13:54:33.495667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.409 [2024-10-01 13:54:33.495694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:23.409 [2024-10-01 13:54:33.495707] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.409 [2024-10-01 13:54:33.496225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.409 [2024-10-01 13:54:33.496253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:23.409 [2024-10-01 13:54:33.496346] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:23.409 [2024-10-01 13:54:33.496362] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:23.409 [2024-10-01 13:54:33.496380] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:23.409 [2024-10-01 13:54:33.496392] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:23.409 BaseBdev1 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.409 13:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.343 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.602 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.602 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.602 "name": "raid_bdev1", 00:20:24.602 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:24.602 "strip_size_kb": 0, 00:20:24.602 "state": "online", 00:20:24.602 "raid_level": "raid1", 00:20:24.602 "superblock": true, 00:20:24.602 "num_base_bdevs": 2, 00:20:24.602 "num_base_bdevs_discovered": 1, 00:20:24.602 "num_base_bdevs_operational": 1, 00:20:24.602 "base_bdevs_list": [ 00:20:24.602 { 00:20:24.602 "name": null, 00:20:24.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.602 "is_configured": false, 00:20:24.602 "data_offset": 0, 00:20:24.602 "data_size": 7936 00:20:24.602 }, 00:20:24.602 { 00:20:24.602 "name": "BaseBdev2", 00:20:24.602 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:24.602 "is_configured": true, 00:20:24.602 "data_offset": 256, 00:20:24.602 "data_size": 7936 00:20:24.602 } 00:20:24.602 ] 00:20:24.602 }' 00:20:24.602 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.602 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.861 13:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.861 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.861 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.861 "name": "raid_bdev1", 00:20:24.861 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:24.861 "strip_size_kb": 0, 00:20:24.861 "state": "online", 00:20:24.861 "raid_level": "raid1", 00:20:24.861 "superblock": true, 00:20:24.861 "num_base_bdevs": 2, 00:20:24.861 "num_base_bdevs_discovered": 1, 00:20:24.861 "num_base_bdevs_operational": 1, 00:20:24.861 "base_bdevs_list": [ 00:20:24.861 { 00:20:24.861 "name": null, 00:20:24.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.861 "is_configured": false, 00:20:24.861 "data_offset": 0, 00:20:24.861 "data_size": 7936 00:20:24.861 }, 00:20:24.861 { 00:20:24.861 "name": "BaseBdev2", 00:20:24.861 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:24.861 "is_configured": true, 
00:20:24.861 "data_offset": 256, 00:20:24.861 "data_size": 7936 00:20:24.861 } 00:20:24.861 ] 00:20:24.861 }' 00:20:24.861 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.146 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.146 [2024-10-01 13:54:35.143701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.146 [2024-10-01 13:54:35.143890] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:25.147 [2024-10-01 13:54:35.143908] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:25.147 request: 00:20:25.147 { 00:20:25.147 "base_bdev": "BaseBdev1", 00:20:25.147 "raid_bdev": "raid_bdev1", 00:20:25.147 "method": "bdev_raid_add_base_bdev", 00:20:25.147 "req_id": 1 00:20:25.147 } 00:20:25.147 Got JSON-RPC error response 00:20:25.147 response: 00:20:25.147 { 00:20:25.147 "code": -22, 00:20:25.147 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:25.147 } 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.147 13:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.082 "name": "raid_bdev1", 00:20:26.082 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:26.082 "strip_size_kb": 0, 00:20:26.082 "state": "online", 00:20:26.082 "raid_level": "raid1", 00:20:26.082 "superblock": true, 00:20:26.082 "num_base_bdevs": 2, 00:20:26.082 "num_base_bdevs_discovered": 1, 00:20:26.082 "num_base_bdevs_operational": 1, 00:20:26.082 "base_bdevs_list": [ 00:20:26.082 { 00:20:26.082 "name": null, 00:20:26.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.082 "is_configured": false, 00:20:26.082 "data_offset": 0, 00:20:26.082 "data_size": 7936 00:20:26.082 }, 00:20:26.082 { 00:20:26.082 "name": "BaseBdev2", 00:20:26.082 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:26.082 "is_configured": true, 00:20:26.082 "data_offset": 256, 00:20:26.082 "data_size": 7936 00:20:26.082 } 00:20:26.082 ] 00:20:26.082 }' 
00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.082 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.651 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.652 "name": "raid_bdev1", 00:20:26.652 "uuid": "83443961-6c7d-49ef-a3fb-67049caa618f", 00:20:26.652 "strip_size_kb": 0, 00:20:26.652 "state": "online", 00:20:26.652 "raid_level": "raid1", 00:20:26.652 "superblock": true, 00:20:26.652 "num_base_bdevs": 2, 00:20:26.652 "num_base_bdevs_discovered": 1, 00:20:26.652 "num_base_bdevs_operational": 1, 00:20:26.652 "base_bdevs_list": [ 00:20:26.652 { 00:20:26.652 "name": null, 00:20:26.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.652 "is_configured": false, 00:20:26.652 "data_offset": 0, 
00:20:26.652 "data_size": 7936 00:20:26.652 }, 00:20:26.652 { 00:20:26.652 "name": "BaseBdev2", 00:20:26.652 "uuid": "9fbae241-82c5-53ab-ae5c-873f11a6468e", 00:20:26.652 "is_configured": true, 00:20:26.652 "data_offset": 256, 00:20:26.652 "data_size": 7936 00:20:26.652 } 00:20:26.652 ] 00:20:26.652 }' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86572 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86572 ']' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86572 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86572 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.652 killing process with pid 86572 00:20:26.652 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.652 00:20:26.652 Latency(us) 00:20:26.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.652 =================================================================================================================== 00:20:26.652 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.652 
13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86572' 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86572 00:20:26.652 13:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86572 00:20:26.652 [2024-10-01 13:54:36.706719] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.652 [2024-10-01 13:54:36.706861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.652 [2024-10-01 13:54:36.706916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.652 [2024-10-01 13:54:36.706945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:26.910 [2024-10-01 13:54:37.027634] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.287 13:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:28.287 00:20:28.287 real 0m20.632s 00:20:28.287 user 0m26.816s 00:20:28.287 sys 0m3.178s 00:20:28.287 13:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.287 13:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.287 ************************************ 00:20:28.287 END TEST raid_rebuild_test_sb_4k 00:20:28.287 ************************************ 00:20:28.287 13:54:38 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:28.287 13:54:38 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:28.287 13:54:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:28.287 13:54:38 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.287 13:54:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.287 ************************************ 00:20:28.287 START TEST raid_state_function_test_sb_md_separate 00:20:28.287 ************************************ 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87269 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:28.287 Process raid pid: 87269 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87269' 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87269 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87269 ']' 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.287 13:54:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.557 [2024-10-01 13:54:38.522976] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:28.557 [2024-10-01 13:54:38.523589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.557 [2024-10-01 13:54:38.699861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.854 [2024-10-01 13:54:38.937047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.112 [2024-10-01 13:54:39.165307] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.112 [2024-10-01 13:54:39.165352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 [2024-10-01 13:54:39.407699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.373 [2024-10-01 13:54:39.407755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.373 [2024-10-01 13:54:39.407770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.373 [2024-10-01 13:54:39.407800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.373 13:54:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.373 "name": "Existed_Raid", 00:20:29.373 "uuid": "05a1b942-749c-4a0a-b0bd-1bbe7ec25b1d", 00:20:29.373 "strip_size_kb": 0, 00:20:29.373 "state": "configuring", 00:20:29.373 "raid_level": "raid1", 00:20:29.373 "superblock": true, 00:20:29.373 "num_base_bdevs": 2, 00:20:29.373 "num_base_bdevs_discovered": 0, 00:20:29.373 "num_base_bdevs_operational": 2, 00:20:29.373 "base_bdevs_list": [ 00:20:29.373 { 00:20:29.373 "name": "BaseBdev1", 00:20:29.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.373 "is_configured": false, 00:20:29.373 "data_offset": 0, 00:20:29.373 "data_size": 0 00:20:29.373 }, 00:20:29.373 { 00:20:29.373 "name": "BaseBdev2", 00:20:29.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.373 "is_configured": false, 00:20:29.373 "data_offset": 0, 00:20:29.373 "data_size": 0 00:20:29.373 } 00:20:29.373 ] 00:20:29.373 }' 00:20:29.373 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.373 13:54:39 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.646 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.646 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.646 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 [2024-10-01 13:54:39.839593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.906 [2024-10-01 13:54:39.839636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 [2024-10-01 13:54:39.851608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.906 [2024-10-01 13:54:39.851655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.906 [2024-10-01 13:54:39.851666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.906 [2024-10-01 13:54:39.851682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.906 13:54:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 [2024-10-01 13:54:39.919518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.906 BaseBdev1 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 [ 00:20:29.906 { 00:20:29.906 "name": "BaseBdev1", 00:20:29.906 "aliases": [ 00:20:29.906 "f049e1a2-013b-4f19-a1be-93100c83064c" 00:20:29.906 ], 00:20:29.906 "product_name": "Malloc disk", 00:20:29.906 "block_size": 4096, 00:20:29.906 "num_blocks": 8192, 00:20:29.906 "uuid": "f049e1a2-013b-4f19-a1be-93100c83064c", 00:20:29.906 "md_size": 32, 00:20:29.906 "md_interleave": false, 00:20:29.906 "dif_type": 0, 00:20:29.906 "assigned_rate_limits": { 00:20:29.906 "rw_ios_per_sec": 0, 00:20:29.906 "rw_mbytes_per_sec": 0, 00:20:29.906 "r_mbytes_per_sec": 0, 00:20:29.906 "w_mbytes_per_sec": 0 00:20:29.906 }, 00:20:29.906 "claimed": true, 00:20:29.906 "claim_type": "exclusive_write", 00:20:29.906 "zoned": false, 00:20:29.906 "supported_io_types": { 00:20:29.906 "read": true, 00:20:29.906 "write": true, 00:20:29.906 "unmap": true, 00:20:29.906 "flush": true, 00:20:29.906 "reset": true, 00:20:29.906 "nvme_admin": false, 00:20:29.906 "nvme_io": false, 00:20:29.906 "nvme_io_md": false, 00:20:29.906 "write_zeroes": true, 00:20:29.906 "zcopy": true, 00:20:29.906 "get_zone_info": false, 00:20:29.906 "zone_management": false, 00:20:29.906 "zone_append": false, 00:20:29.906 "compare": false, 00:20:29.906 "compare_and_write": false, 00:20:29.906 "abort": true, 00:20:29.906 "seek_hole": false, 00:20:29.906 "seek_data": false, 00:20:29.906 "copy": true, 00:20:29.906 "nvme_iov_md": false 00:20:29.906 }, 00:20:29.906 "memory_domains": [ 00:20:29.906 { 00:20:29.906 "dma_device_id": "system", 00:20:29.906 "dma_device_type": 1 00:20:29.906 }, 00:20:29.906 { 00:20:29.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.906 "dma_device_type": 2 00:20:29.906 } 
00:20:29.906 ], 00:20:29.906 "driver_specific": {} 00:20:29.906 } 00:20:29.906 ] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:29.906 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.907 13:54:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.907 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.907 "name": "Existed_Raid", 00:20:29.907 "uuid": "20b39774-531f-4ae9-9796-852b70085b03", 00:20:29.907 "strip_size_kb": 0, 00:20:29.907 "state": "configuring", 00:20:29.907 "raid_level": "raid1", 00:20:29.907 "superblock": true, 00:20:29.907 "num_base_bdevs": 2, 00:20:29.907 "num_base_bdevs_discovered": 1, 00:20:29.907 "num_base_bdevs_operational": 2, 00:20:29.907 "base_bdevs_list": [ 00:20:29.907 { 00:20:29.907 "name": "BaseBdev1", 00:20:29.907 "uuid": "f049e1a2-013b-4f19-a1be-93100c83064c", 00:20:29.907 "is_configured": true, 00:20:29.907 "data_offset": 256, 00:20:29.907 "data_size": 7936 00:20:29.907 }, 00:20:29.907 { 00:20:29.907 "name": "BaseBdev2", 00:20:29.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.907 "is_configured": false, 00:20:29.907 "data_offset": 0, 00:20:29.907 "data_size": 0 00:20:29.907 } 00:20:29.907 ] 00:20:29.907 }' 00:20:29.907 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.907 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.166 [2024-10-01 13:54:40.350968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:20:30.166 [2024-10-01 13:54:40.351030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.166 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.425 [2024-10-01 13:54:40.363024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.425 [2024-10-01 13:54:40.365260] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.425 [2024-10-01 13:54:40.365309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.425 "name": "Existed_Raid", 00:20:30.425 "uuid": "a5ab2974-b0ed-407b-9fa9-1c988d240d44", 00:20:30.425 "strip_size_kb": 0, 00:20:30.425 "state": "configuring", 00:20:30.425 "raid_level": "raid1", 00:20:30.425 "superblock": true, 00:20:30.425 "num_base_bdevs": 2, 00:20:30.425 "num_base_bdevs_discovered": 1, 00:20:30.425 "num_base_bdevs_operational": 2, 00:20:30.425 "base_bdevs_list": [ 00:20:30.425 { 00:20:30.425 "name": 
"BaseBdev1", 00:20:30.425 "uuid": "f049e1a2-013b-4f19-a1be-93100c83064c", 00:20:30.425 "is_configured": true, 00:20:30.425 "data_offset": 256, 00:20:30.425 "data_size": 7936 00:20:30.425 }, 00:20:30.425 { 00:20:30.425 "name": "BaseBdev2", 00:20:30.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.425 "is_configured": false, 00:20:30.425 "data_offset": 0, 00:20:30.425 "data_size": 0 00:20:30.425 } 00:20:30.425 ] 00:20:30.425 }' 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.425 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.685 [2024-10-01 13:54:40.819906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.685 [2024-10-01 13:54:40.820160] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:30.685 [2024-10-01 13:54:40.820181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:30.685 [2024-10-01 13:54:40.820355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:30.685 [2024-10-01 13:54:40.820502] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:30.685 [2024-10-01 13:54:40.820517] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:30.685 [2024-10-01 13:54:40.820629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.685 BaseBdev2 
00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.685 [ 00:20:30.685 { 00:20:30.685 "name": "BaseBdev2", 00:20:30.685 "aliases": [ 00:20:30.685 "fdbef653-65f3-4f1f-8554-97ac4f04b087" 00:20:30.685 ], 00:20:30.685 "product_name": "Malloc disk", 00:20:30.685 
"block_size": 4096, 00:20:30.685 "num_blocks": 8192, 00:20:30.685 "uuid": "fdbef653-65f3-4f1f-8554-97ac4f04b087", 00:20:30.685 "md_size": 32, 00:20:30.685 "md_interleave": false, 00:20:30.685 "dif_type": 0, 00:20:30.685 "assigned_rate_limits": { 00:20:30.685 "rw_ios_per_sec": 0, 00:20:30.685 "rw_mbytes_per_sec": 0, 00:20:30.685 "r_mbytes_per_sec": 0, 00:20:30.685 "w_mbytes_per_sec": 0 00:20:30.685 }, 00:20:30.685 "claimed": true, 00:20:30.685 "claim_type": "exclusive_write", 00:20:30.685 "zoned": false, 00:20:30.685 "supported_io_types": { 00:20:30.685 "read": true, 00:20:30.685 "write": true, 00:20:30.685 "unmap": true, 00:20:30.685 "flush": true, 00:20:30.685 "reset": true, 00:20:30.685 "nvme_admin": false, 00:20:30.685 "nvme_io": false, 00:20:30.685 "nvme_io_md": false, 00:20:30.685 "write_zeroes": true, 00:20:30.685 "zcopy": true, 00:20:30.685 "get_zone_info": false, 00:20:30.685 "zone_management": false, 00:20:30.685 "zone_append": false, 00:20:30.685 "compare": false, 00:20:30.685 "compare_and_write": false, 00:20:30.685 "abort": true, 00:20:30.685 "seek_hole": false, 00:20:30.685 "seek_data": false, 00:20:30.685 "copy": true, 00:20:30.685 "nvme_iov_md": false 00:20:30.685 }, 00:20:30.685 "memory_domains": [ 00:20:30.685 { 00:20:30.685 "dma_device_id": "system", 00:20:30.685 "dma_device_type": 1 00:20:30.685 }, 00:20:30.685 { 00:20:30.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.685 "dma_device_type": 2 00:20:30.685 } 00:20:30.685 ], 00:20:30.685 "driver_specific": {} 00:20:30.685 } 00:20:30.685 ] 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i 
< num_base_bdevs )) 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.685 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.944 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.944 13:54:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.944 "name": "Existed_Raid", 00:20:30.944 "uuid": "a5ab2974-b0ed-407b-9fa9-1c988d240d44", 00:20:30.944 "strip_size_kb": 0, 00:20:30.944 "state": "online", 00:20:30.944 "raid_level": "raid1", 00:20:30.944 "superblock": true, 00:20:30.944 "num_base_bdevs": 2, 00:20:30.944 "num_base_bdevs_discovered": 2, 00:20:30.944 "num_base_bdevs_operational": 2, 00:20:30.944 "base_bdevs_list": [ 00:20:30.944 { 00:20:30.944 "name": "BaseBdev1", 00:20:30.944 "uuid": "f049e1a2-013b-4f19-a1be-93100c83064c", 00:20:30.944 "is_configured": true, 00:20:30.944 "data_offset": 256, 00:20:30.944 "data_size": 7936 00:20:30.944 }, 00:20:30.944 { 00:20:30.944 "name": "BaseBdev2", 00:20:30.944 "uuid": "fdbef653-65f3-4f1f-8554-97ac4f04b087", 00:20:30.944 "is_configured": true, 00:20:30.944 "data_offset": 256, 00:20:30.944 "data_size": 7936 00:20:30.944 } 00:20:30.944 ] 00:20:30.944 }' 00:20:30.944 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.944 13:54:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.202 
13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.202 [2024-10-01 13:54:41.315926] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.202 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.202 "name": "Existed_Raid", 00:20:31.202 "aliases": [ 00:20:31.202 "a5ab2974-b0ed-407b-9fa9-1c988d240d44" 00:20:31.202 ], 00:20:31.202 "product_name": "Raid Volume", 00:20:31.202 "block_size": 4096, 00:20:31.202 "num_blocks": 7936, 00:20:31.202 "uuid": "a5ab2974-b0ed-407b-9fa9-1c988d240d44", 00:20:31.202 "md_size": 32, 00:20:31.202 "md_interleave": false, 00:20:31.202 "dif_type": 0, 00:20:31.202 "assigned_rate_limits": { 00:20:31.202 "rw_ios_per_sec": 0, 00:20:31.202 "rw_mbytes_per_sec": 0, 00:20:31.202 "r_mbytes_per_sec": 0, 00:20:31.202 "w_mbytes_per_sec": 0 00:20:31.202 }, 00:20:31.202 "claimed": false, 00:20:31.202 "zoned": false, 00:20:31.202 "supported_io_types": { 00:20:31.202 "read": true, 00:20:31.202 "write": true, 00:20:31.202 "unmap": false, 00:20:31.202 "flush": false, 00:20:31.202 "reset": true, 00:20:31.202 "nvme_admin": false, 00:20:31.202 "nvme_io": false, 00:20:31.202 "nvme_io_md": false, 00:20:31.202 "write_zeroes": true, 00:20:31.202 "zcopy": false, 00:20:31.202 "get_zone_info": false, 00:20:31.202 "zone_management": false, 00:20:31.202 "zone_append": false, 00:20:31.202 "compare": false, 00:20:31.202 
"compare_and_write": false, 00:20:31.202 "abort": false, 00:20:31.202 "seek_hole": false, 00:20:31.202 "seek_data": false, 00:20:31.202 "copy": false, 00:20:31.202 "nvme_iov_md": false 00:20:31.202 }, 00:20:31.202 "memory_domains": [ 00:20:31.202 { 00:20:31.202 "dma_device_id": "system", 00:20:31.202 "dma_device_type": 1 00:20:31.202 }, 00:20:31.202 { 00:20:31.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.202 "dma_device_type": 2 00:20:31.202 }, 00:20:31.202 { 00:20:31.202 "dma_device_id": "system", 00:20:31.202 "dma_device_type": 1 00:20:31.202 }, 00:20:31.202 { 00:20:31.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.202 "dma_device_type": 2 00:20:31.202 } 00:20:31.202 ], 00:20:31.202 "driver_specific": { 00:20:31.202 "raid": { 00:20:31.202 "uuid": "a5ab2974-b0ed-407b-9fa9-1c988d240d44", 00:20:31.202 "strip_size_kb": 0, 00:20:31.202 "state": "online", 00:20:31.202 "raid_level": "raid1", 00:20:31.202 "superblock": true, 00:20:31.202 "num_base_bdevs": 2, 00:20:31.202 "num_base_bdevs_discovered": 2, 00:20:31.202 "num_base_bdevs_operational": 2, 00:20:31.202 "base_bdevs_list": [ 00:20:31.202 { 00:20:31.202 "name": "BaseBdev1", 00:20:31.202 "uuid": "f049e1a2-013b-4f19-a1be-93100c83064c", 00:20:31.202 "is_configured": true, 00:20:31.202 "data_offset": 256, 00:20:31.202 "data_size": 7936 00:20:31.202 }, 00:20:31.202 { 00:20:31.202 "name": "BaseBdev2", 00:20:31.202 "uuid": "fdbef653-65f3-4f1f-8554-97ac4f04b087", 00:20:31.202 "is_configured": true, 00:20:31.202 "data_offset": 256, 00:20:31.202 "data_size": 7936 00:20:31.202 } 00:20:31.202 ] 00:20:31.202 } 00:20:31.202 } 00:20:31.202 }' 00:20:31.203 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:31.462 BaseBdev2' 00:20:31.462 13:54:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.462 13:54:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.462 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.462 [2024-10-01 13:54:41.547688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.721 "name": "Existed_Raid", 00:20:31.721 "uuid": "a5ab2974-b0ed-407b-9fa9-1c988d240d44", 00:20:31.721 "strip_size_kb": 0, 00:20:31.721 "state": "online", 00:20:31.721 "raid_level": "raid1", 
00:20:31.721 "superblock": true, 00:20:31.721 "num_base_bdevs": 2, 00:20:31.721 "num_base_bdevs_discovered": 1, 00:20:31.721 "num_base_bdevs_operational": 1, 00:20:31.721 "base_bdevs_list": [ 00:20:31.721 { 00:20:31.721 "name": null, 00:20:31.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.721 "is_configured": false, 00:20:31.721 "data_offset": 0, 00:20:31.721 "data_size": 7936 00:20:31.721 }, 00:20:31.721 { 00:20:31.721 "name": "BaseBdev2", 00:20:31.721 "uuid": "fdbef653-65f3-4f1f-8554-97ac4f04b087", 00:20:31.721 "is_configured": true, 00:20:31.721 "data_offset": 256, 00:20:31.721 "data_size": 7936 00:20:31.721 } 00:20:31.721 ] 00:20:31.721 }' 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.721 13:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.980 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:31.980 [2024-10-01 13:54:42.146028] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.980 [2024-10-01 13:54:42.146163] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.238 [2024-10-01 13:54:42.254343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.238 [2024-10-01 13:54:42.254392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.238 [2024-10-01 13:54:42.254420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:32.238 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.238 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.239 13:54:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87269 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87269 ']' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87269 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87269 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.239 killing process with pid 87269 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87269' 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87269 00:20:32.239 [2024-10-01 13:54:42.347281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.239 13:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@974 -- # wait 87269 00:20:32.239 [2024-10-01 13:54:42.365288] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.615 13:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:33.615 00:20:33.615 real 0m5.248s 00:20:33.615 user 0m7.421s 00:20:33.615 sys 0m0.964s 00:20:33.615 13:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.615 13:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.615 ************************************ 00:20:33.615 END TEST raid_state_function_test_sb_md_separate 00:20:33.615 ************************************ 00:20:33.615 13:54:43 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:33.616 13:54:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:33.616 13:54:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.616 13:54:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.616 ************************************ 00:20:33.616 START TEST raid_superblock_test_md_separate 00:20:33.616 ************************************ 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87522 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87522 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87522 ']' 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.616 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.616 13:54:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.874 [2024-10-01 13:54:43.846524] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:33.874 [2024-10-01 13:54:43.846664] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87522 ] 00:20:33.875 [2024-10-01 13:54:44.004888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.133 [2024-10-01 13:54:44.233171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.392 [2024-10-01 13:54:44.453821] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.392 [2024-10-01 13:54:44.453892] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.651 malloc1 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.651 [2024-10-01 13:54:44.787322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.651 [2024-10-01 13:54:44.787386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.651 [2024-10-01 13:54:44.787426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:34.651 [2024-10-01 13:54:44.787440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.651 
[2024-10-01 13:54:44.789695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.651 [2024-10-01 13:54:44.789735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:34.651 pt1 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.651 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.911 malloc2 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 
-p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.911 [2024-10-01 13:54:44.859083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:34.911 [2024-10-01 13:54:44.859150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.911 [2024-10-01 13:54:44.859177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:34.911 [2024-10-01 13:54:44.859190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.911 [2024-10-01 13:54:44.861561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.911 [2024-10-01 13:54:44.861601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:34.911 pt2 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.911 [2024-10-01 13:54:44.871164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:34.911 [2024-10-01 13:54:44.873397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:20:34.911 [2024-10-01 13:54:44.873594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:34.911 [2024-10-01 13:54:44.873608] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:34.911 [2024-10-01 13:54:44.873700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:34.911 [2024-10-01 13:54:44.873823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:34.911 [2024-10-01 13:54:44.873835] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:34.911 [2024-10-01 13:54:44.873963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:34.911 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.912 "name": "raid_bdev1", 00:20:34.912 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:34.912 "strip_size_kb": 0, 00:20:34.912 "state": "online", 00:20:34.912 "raid_level": "raid1", 00:20:34.912 "superblock": true, 00:20:34.912 "num_base_bdevs": 2, 00:20:34.912 "num_base_bdevs_discovered": 2, 00:20:34.912 "num_base_bdevs_operational": 2, 00:20:34.912 "base_bdevs_list": [ 00:20:34.912 { 00:20:34.912 "name": "pt1", 00:20:34.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.912 "is_configured": true, 00:20:34.912 "data_offset": 256, 00:20:34.912 "data_size": 7936 00:20:34.912 }, 00:20:34.912 { 00:20:34.912 "name": "pt2", 00:20:34.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.912 "is_configured": true, 00:20:34.912 "data_offset": 256, 00:20:34.912 "data_size": 7936 00:20:34.912 } 00:20:34.912 ] 00:20:34.912 }' 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.912 13:54:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.170 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.170 [2024-10-01 13:54:45.338756] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.428 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.428 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.428 "name": "raid_bdev1", 00:20:35.428 "aliases": [ 00:20:35.428 "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69" 00:20:35.428 ], 00:20:35.428 "product_name": "Raid Volume", 00:20:35.428 "block_size": 4096, 00:20:35.428 "num_blocks": 7936, 00:20:35.428 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:35.428 "md_size": 32, 00:20:35.428 "md_interleave": false, 00:20:35.428 "dif_type": 0, 00:20:35.428 "assigned_rate_limits": { 00:20:35.428 "rw_ios_per_sec": 0, 00:20:35.428 "rw_mbytes_per_sec": 0, 00:20:35.428 
"r_mbytes_per_sec": 0, 00:20:35.428 "w_mbytes_per_sec": 0 00:20:35.428 }, 00:20:35.428 "claimed": false, 00:20:35.428 "zoned": false, 00:20:35.428 "supported_io_types": { 00:20:35.429 "read": true, 00:20:35.429 "write": true, 00:20:35.429 "unmap": false, 00:20:35.429 "flush": false, 00:20:35.429 "reset": true, 00:20:35.429 "nvme_admin": false, 00:20:35.429 "nvme_io": false, 00:20:35.429 "nvme_io_md": false, 00:20:35.429 "write_zeroes": true, 00:20:35.429 "zcopy": false, 00:20:35.429 "get_zone_info": false, 00:20:35.429 "zone_management": false, 00:20:35.429 "zone_append": false, 00:20:35.429 "compare": false, 00:20:35.429 "compare_and_write": false, 00:20:35.429 "abort": false, 00:20:35.429 "seek_hole": false, 00:20:35.429 "seek_data": false, 00:20:35.429 "copy": false, 00:20:35.429 "nvme_iov_md": false 00:20:35.429 }, 00:20:35.429 "memory_domains": [ 00:20:35.429 { 00:20:35.429 "dma_device_id": "system", 00:20:35.429 "dma_device_type": 1 00:20:35.429 }, 00:20:35.429 { 00:20:35.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.429 "dma_device_type": 2 00:20:35.429 }, 00:20:35.429 { 00:20:35.429 "dma_device_id": "system", 00:20:35.429 "dma_device_type": 1 00:20:35.429 }, 00:20:35.429 { 00:20:35.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.429 "dma_device_type": 2 00:20:35.429 } 00:20:35.429 ], 00:20:35.429 "driver_specific": { 00:20:35.429 "raid": { 00:20:35.429 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:35.429 "strip_size_kb": 0, 00:20:35.429 "state": "online", 00:20:35.429 "raid_level": "raid1", 00:20:35.429 "superblock": true, 00:20:35.429 "num_base_bdevs": 2, 00:20:35.429 "num_base_bdevs_discovered": 2, 00:20:35.429 "num_base_bdevs_operational": 2, 00:20:35.429 "base_bdevs_list": [ 00:20:35.429 { 00:20:35.429 "name": "pt1", 00:20:35.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.429 "is_configured": true, 00:20:35.429 "data_offset": 256, 00:20:35.429 "data_size": 7936 00:20:35.429 }, 00:20:35.429 { 00:20:35.429 "name": 
"pt2", 00:20:35.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.429 "is_configured": true, 00:20:35.429 "data_offset": 256, 00:20:35.429 "data_size": 7936 00:20:35.429 } 00:20:35.429 ] 00:20:35.429 } 00:20:35.429 } 00:20:35.429 }' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:35.429 pt2' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.429 [2024-10-01 13:54:45.546422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 ']' 00:20:35.429 13:54:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.429 [2024-10-01 13:54:45.586096] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.429 [2024-10-01 13:54:45.586128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.429 [2024-10-01 13:54:45.586230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.429 [2024-10-01 13:54:45.586295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.429 [2024-10-01 13:54:45.586311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:35.429 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b 
''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.688 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.688 [2024-10-01 13:54:45.721979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:35.688 [2024-10-01 13:54:45.724183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:35.688 [2024-10-01 13:54:45.724296] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:35.688 [2024-10-01 13:54:45.724355] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:35.689 [2024-10-01 13:54:45.724374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.689 [2024-10-01 13:54:45.724388] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:35.689 request: 00:20:35.689 { 00:20:35.689 "name": "raid_bdev1", 00:20:35.689 "raid_level": "raid1", 00:20:35.689 "base_bdevs": [ 00:20:35.689 "malloc1", 00:20:35.689 "malloc2" 00:20:35.689 ], 00:20:35.689 "superblock": false, 00:20:35.689 "method": "bdev_raid_create", 00:20:35.689 "req_id": 1 00:20:35.689 } 00:20:35.689 Got JSON-RPC error response 00:20:35.689 response: 00:20:35.689 { 00:20:35.689 "code": -17, 00:20:35.689 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:35.689 } 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 
00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.689 [2024-10-01 13:54:45.785808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:35.689 [2024-10-01 13:54:45.785875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.689 [2024-10-01 13:54:45.785896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:35.689 [2024-10-01 13:54:45.785910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.689 [2024-10-01 13:54:45.788289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.689 [2024-10-01 13:54:45.788334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:35.689 [2024-10-01 13:54:45.788391] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:35.689 [2024-10-01 13:54:45.788469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.689 pt1 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.689 "name": "raid_bdev1", 00:20:35.689 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:35.689 "strip_size_kb": 0, 00:20:35.689 "state": "configuring", 00:20:35.689 "raid_level": "raid1", 00:20:35.689 "superblock": true, 00:20:35.689 "num_base_bdevs": 2, 00:20:35.689 "num_base_bdevs_discovered": 1, 00:20:35.689 "num_base_bdevs_operational": 2, 00:20:35.689 "base_bdevs_list": [ 00:20:35.689 { 00:20:35.689 "name": "pt1", 00:20:35.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.689 
"is_configured": true, 00:20:35.689 "data_offset": 256, 00:20:35.689 "data_size": 7936 00:20:35.689 }, 00:20:35.689 { 00:20:35.689 "name": null, 00:20:35.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.689 "is_configured": false, 00:20:35.689 "data_offset": 256, 00:20:35.689 "data_size": 7936 00:20:35.689 } 00:20:35.689 ] 00:20:35.689 }' 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.689 13:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.257 [2024-10-01 13:54:46.245192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:36.257 [2024-10-01 13:54:46.245272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.257 [2024-10-01 13:54:46.245297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:36.257 [2024-10-01 13:54:46.245312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.257 [2024-10-01 13:54:46.245568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.257 [2024-10-01 13:54:46.245589] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:36.257 [2024-10-01 13:54:46.245646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:36.257 [2024-10-01 13:54:46.245671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:36.257 [2024-10-01 13:54:46.245806] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:36.257 [2024-10-01 13:54:46.245833] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:36.257 [2024-10-01 13:54:46.245904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:36.257 [2024-10-01 13:54:46.246015] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:36.257 [2024-10-01 13:54:46.246025] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:36.257 [2024-10-01 13:54:46.246128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.257 pt2 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.257 13:54:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.257 "name": "raid_bdev1", 00:20:36.257 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:36.257 "strip_size_kb": 0, 00:20:36.257 "state": "online", 00:20:36.257 "raid_level": "raid1", 00:20:36.257 "superblock": true, 00:20:36.257 "num_base_bdevs": 2, 00:20:36.257 "num_base_bdevs_discovered": 2, 00:20:36.257 "num_base_bdevs_operational": 2, 00:20:36.257 "base_bdevs_list": [ 00:20:36.257 { 00:20:36.257 "name": "pt1", 00:20:36.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.257 "is_configured": true, 00:20:36.257 "data_offset": 256, 00:20:36.257 "data_size": 
7936 00:20:36.257 }, 00:20:36.257 { 00:20:36.257 "name": "pt2", 00:20:36.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.257 "is_configured": true, 00:20:36.257 "data_offset": 256, 00:20:36.257 "data_size": 7936 00:20:36.257 } 00:20:36.257 ] 00:20:36.257 }' 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.257 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.516 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.516 [2024-10-01 13:54:46.688873] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.774 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.774 13:54:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.774 "name": "raid_bdev1", 00:20:36.774 "aliases": [ 00:20:36.774 "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69" 00:20:36.774 ], 00:20:36.774 "product_name": "Raid Volume", 00:20:36.774 "block_size": 4096, 00:20:36.774 "num_blocks": 7936, 00:20:36.774 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:36.774 "md_size": 32, 00:20:36.774 "md_interleave": false, 00:20:36.774 "dif_type": 0, 00:20:36.774 "assigned_rate_limits": { 00:20:36.774 "rw_ios_per_sec": 0, 00:20:36.774 "rw_mbytes_per_sec": 0, 00:20:36.774 "r_mbytes_per_sec": 0, 00:20:36.774 "w_mbytes_per_sec": 0 00:20:36.774 }, 00:20:36.774 "claimed": false, 00:20:36.774 "zoned": false, 00:20:36.774 "supported_io_types": { 00:20:36.774 "read": true, 00:20:36.774 "write": true, 00:20:36.774 "unmap": false, 00:20:36.774 "flush": false, 00:20:36.774 "reset": true, 00:20:36.774 "nvme_admin": false, 00:20:36.774 "nvme_io": false, 00:20:36.774 "nvme_io_md": false, 00:20:36.774 "write_zeroes": true, 00:20:36.774 "zcopy": false, 00:20:36.775 "get_zone_info": false, 00:20:36.775 "zone_management": false, 00:20:36.775 "zone_append": false, 00:20:36.775 "compare": false, 00:20:36.775 "compare_and_write": false, 00:20:36.775 "abort": false, 00:20:36.775 "seek_hole": false, 00:20:36.775 "seek_data": false, 00:20:36.775 "copy": false, 00:20:36.775 "nvme_iov_md": false 00:20:36.775 }, 00:20:36.775 "memory_domains": [ 00:20:36.775 { 00:20:36.775 "dma_device_id": "system", 00:20:36.775 "dma_device_type": 1 00:20:36.775 }, 00:20:36.775 { 00:20:36.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.775 "dma_device_type": 2 00:20:36.775 }, 00:20:36.775 { 00:20:36.775 "dma_device_id": "system", 00:20:36.775 "dma_device_type": 1 00:20:36.775 }, 00:20:36.775 { 00:20:36.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.775 "dma_device_type": 2 00:20:36.775 } 00:20:36.775 ], 00:20:36.775 "driver_specific": { 00:20:36.775 "raid": { 00:20:36.775 "uuid": 
"b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:36.775 "strip_size_kb": 0, 00:20:36.775 "state": "online", 00:20:36.775 "raid_level": "raid1", 00:20:36.775 "superblock": true, 00:20:36.775 "num_base_bdevs": 2, 00:20:36.775 "num_base_bdevs_discovered": 2, 00:20:36.775 "num_base_bdevs_operational": 2, 00:20:36.775 "base_bdevs_list": [ 00:20:36.775 { 00:20:36.775 "name": "pt1", 00:20:36.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.775 "is_configured": true, 00:20:36.775 "data_offset": 256, 00:20:36.775 "data_size": 7936 00:20:36.775 }, 00:20:36.775 { 00:20:36.775 "name": "pt2", 00:20:36.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.775 "is_configured": true, 00:20:36.775 "data_offset": 256, 00:20:36.775 "data_size": 7936 00:20:36.775 } 00:20:36.775 ] 00:20:36.775 } 00:20:36.775 } 00:20:36.775 }' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:36.775 pt2' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:36.775 13:54:46 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.775 [2024-10-01 13:54:46.944535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 '!=' b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 ']' 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.034 [2024-10-01 13:54:46.992259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.034 
13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.034 13:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.034 "name": "raid_bdev1", 00:20:37.034 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:37.034 "strip_size_kb": 0, 00:20:37.034 "state": "online", 00:20:37.034 "raid_level": "raid1", 00:20:37.034 "superblock": true, 00:20:37.034 "num_base_bdevs": 2, 00:20:37.034 "num_base_bdevs_discovered": 1, 00:20:37.034 "num_base_bdevs_operational": 1, 00:20:37.034 "base_bdevs_list": [ 00:20:37.034 { 00:20:37.034 "name": null, 00:20:37.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.034 "is_configured": false, 00:20:37.034 "data_offset": 0, 00:20:37.034 "data_size": 7936 00:20:37.034 }, 00:20:37.034 { 00:20:37.034 "name": "pt2", 00:20:37.034 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:37.034 "is_configured": true, 00:20:37.034 "data_offset": 256, 00:20:37.034 "data_size": 7936 00:20:37.034 } 00:20:37.034 ] 00:20:37.034 }' 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.034 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.293 [2024-10-01 13:54:47.443650] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.293 [2024-10-01 13:54:47.443686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.293 [2024-10-01 13:54:47.443772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.293 [2024-10-01 13:54:47.443821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.293 [2024-10-01 13:54:47.443837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.293 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.551 [2024-10-01 13:54:47.511637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.551 [2024-10-01 13:54:47.511706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.551 [2024-10-01 13:54:47.511726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:37.551 [2024-10-01 13:54:47.511742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.551 [2024-10-01 13:54:47.514023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.551 [2024-10-01 13:54:47.514067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.551 [2024-10-01 13:54:47.514125] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.551 [2024-10-01 13:54:47.514174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.551 [2024-10-01 13:54:47.514278] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:37.551 [2024-10-01 13:54:47.514293] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:37.551 [2024-10-01 13:54:47.514364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:37.551 [2024-10-01 13:54:47.514493] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:37.551 [2024-10-01 13:54:47.514503] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:37.551 [2024-10-01 13:54:47.514608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.551 pt2 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.551 "name": "raid_bdev1", 00:20:37.551 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:37.551 
"strip_size_kb": 0, 00:20:37.551 "state": "online", 00:20:37.551 "raid_level": "raid1", 00:20:37.551 "superblock": true, 00:20:37.551 "num_base_bdevs": 2, 00:20:37.551 "num_base_bdevs_discovered": 1, 00:20:37.551 "num_base_bdevs_operational": 1, 00:20:37.551 "base_bdevs_list": [ 00:20:37.551 { 00:20:37.551 "name": null, 00:20:37.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.551 "is_configured": false, 00:20:37.551 "data_offset": 256, 00:20:37.551 "data_size": 7936 00:20:37.551 }, 00:20:37.551 { 00:20:37.551 "name": "pt2", 00:20:37.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.551 "is_configured": true, 00:20:37.551 "data_offset": 256, 00:20:37.551 "data_size": 7936 00:20:37.551 } 00:20:37.551 ] 00:20:37.551 }' 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.551 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.811 [2024-10-01 13:54:47.935613] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.811 [2024-10-01 13:54:47.935652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.811 [2024-10-01 13:54:47.935731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.811 [2024-10-01 13:54:47.935786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.811 [2024-10-01 13:54:47.935798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state 
offline 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.811 [2024-10-01 13:54:47.991620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:37.811 [2024-10-01 13:54:47.991688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.811 [2024-10-01 13:54:47.991712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:37.811 [2024-10-01 13:54:47.991724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.811 [2024-10-01 13:54:47.994055] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.811 [2024-10-01 13:54:47.994094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:37.811 [2024-10-01 13:54:47.994157] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:37.811 [2024-10-01 13:54:47.994203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:37.811 [2024-10-01 13:54:47.994334] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:37.811 [2024-10-01 13:54:47.994347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.811 [2024-10-01 13:54:47.994371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:37.811 [2024-10-01 13:54:47.994476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.811 [2024-10-01 13:54:47.994551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:37.811 [2024-10-01 13:54:47.994561] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:37.811 [2024-10-01 13:54:47.994635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.811 [2024-10-01 13:54:47.994748] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:37.811 [2024-10-01 13:54:47.994760] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:37.811 [2024-10-01 13:54:47.994866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.811 pt1 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.811 13:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.811 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.811 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.811 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.811 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.071 "name": "raid_bdev1", 
00:20:38.071 "uuid": "b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69", 00:20:38.071 "strip_size_kb": 0, 00:20:38.071 "state": "online", 00:20:38.071 "raid_level": "raid1", 00:20:38.071 "superblock": true, 00:20:38.071 "num_base_bdevs": 2, 00:20:38.071 "num_base_bdevs_discovered": 1, 00:20:38.071 "num_base_bdevs_operational": 1, 00:20:38.071 "base_bdevs_list": [ 00:20:38.071 { 00:20:38.071 "name": null, 00:20:38.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.071 "is_configured": false, 00:20:38.071 "data_offset": 256, 00:20:38.071 "data_size": 7936 00:20:38.071 }, 00:20:38.071 { 00:20:38.071 "name": "pt2", 00:20:38.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.071 "is_configured": true, 00:20:38.071 "data_offset": 256, 00:20:38.071 "data_size": 7936 00:20:38.071 } 00:20:38.071 ] 00:20:38.071 }' 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.071 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.330 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.330 [2024-10-01 13:54:48.495872] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 '!=' b46ccbdf-6d62-4c8d-b0b8-227ccfeadd69 ']' 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87522 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87522 ']' 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87522 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87522 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87522' 00:20:38.591 killing process with pid 87522 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87522 00:20:38.591 [2024-10-01 13:54:48.576730] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:38.591 [2024-10-01 13:54:48.576841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.591 13:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87522 00:20:38.591 [2024-10-01 13:54:48.576896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.591 [2024-10-01 13:54:48.576915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:38.851 [2024-10-01 13:54:48.810124] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.229 13:54:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:40.230 00:20:40.230 real 0m6.388s 00:20:40.230 user 0m9.534s 00:20:40.230 sys 0m1.241s 00:20:40.230 13:54:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.230 13:54:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 ************************************ 00:20:40.230 END TEST raid_superblock_test_md_separate 00:20:40.230 ************************************ 00:20:40.230 13:54:50 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:40.230 13:54:50 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:40.230 13:54:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:40.230 13:54:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.230 13:54:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 ************************************ 00:20:40.230 START TEST raid_rebuild_test_sb_md_separate 00:20:40.230 ************************************ 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:40.230 13:54:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87845 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87845 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87845 ']' 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.230 13:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:40.230 [2024-10-01 13:54:50.313541] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:40.230 [2024-10-01 13:54:50.313676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87845 ] 00:20:40.230 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.230 Zero copy mechanism will not be used. 00:20:40.489 [2024-10-01 13:54:50.487441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.748 [2024-10-01 13:54:50.707962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.748 [2024-10-01 13:54:50.920793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.748 [2024-10-01 13:54:50.920861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.007 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.007 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:20:41.007 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:41.007 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:41.007 13:54:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.007 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 BaseBdev1_malloc 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-10-01 13:54:51.240185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:41.267 [2024-10-01 13:54:51.240252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.267 [2024-10-01 13:54:51.240283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:41.267 [2024-10-01 13:54:51.240299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.267 [2024-10-01 13:54:51.242656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.267 [2024-10-01 13:54:51.242703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:41.267 BaseBdev1 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 BaseBdev2_malloc 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-10-01 13:54:51.313982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:41.267 [2024-10-01 13:54:51.314049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.267 [2024-10-01 13:54:51.314073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:41.267 [2024-10-01 13:54:51.314087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.267 [2024-10-01 13:54:51.316524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.267 [2024-10-01 13:54:51.316568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:41.267 BaseBdev2 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 spare_malloc 00:20:41.267 13:54:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 spare_delay 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-10-01 13:54:51.386647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:41.267 [2024-10-01 13:54:51.386716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.267 [2024-10-01 13:54:51.386739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:41.267 [2024-10-01 13:54:51.386754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.267 [2024-10-01 13:54:51.389071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.267 [2024-10-01 13:54:51.389117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:41.267 spare 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-10-01 13:54:51.398689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.267 [2024-10-01 13:54:51.400931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.267 [2024-10-01 13:54:51.401120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:41.267 [2024-10-01 13:54:51.401138] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:41.267 [2024-10-01 13:54:51.401227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:41.267 [2024-10-01 13:54:51.401365] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:41.267 [2024-10-01 13:54:51.401387] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:41.267 [2024-10-01 13:54:51.401535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.267 13:54:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.267 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.267 "name": "raid_bdev1", 00:20:41.267 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:41.267 "strip_size_kb": 0, 00:20:41.267 "state": "online", 00:20:41.267 "raid_level": "raid1", 00:20:41.267 "superblock": true, 00:20:41.267 "num_base_bdevs": 2, 00:20:41.267 "num_base_bdevs_discovered": 2, 00:20:41.267 "num_base_bdevs_operational": 2, 00:20:41.267 "base_bdevs_list": [ 00:20:41.267 { 00:20:41.267 "name": "BaseBdev1", 00:20:41.267 "uuid": "b4c36a12-34f6-52f1-ada1-98d4d4b732cb", 00:20:41.267 "is_configured": true, 00:20:41.267 "data_offset": 256, 00:20:41.267 
"data_size": 7936 00:20:41.267 }, 00:20:41.267 { 00:20:41.267 "name": "BaseBdev2", 00:20:41.267 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:41.267 "is_configured": true, 00:20:41.267 "data_offset": 256, 00:20:41.267 "data_size": 7936 00:20:41.267 } 00:20:41.267 ] 00:20:41.268 }' 00:20:41.268 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.268 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.836 [2024-10-01 13:54:51.886349] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.836 13:54:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:41.836 13:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:42.096 [2024-10-01 13:54:52.173738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:42.096 /dev/nbd0 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.096 13:54:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.096 1+0 records in 00:20:42.096 1+0 records out 00:20:42.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474006 s, 8.6 MB/s 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # return 0 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:42.096 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:43.051 7936+0 records in 00:20:43.051 7936+0 records out 00:20:43.051 32505856 bytes (33 MB, 31 MiB) copied, 0.740204 s, 43.9 MB/s 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.051 13:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:43.051 [2024-10-01 13:54:53.214437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.051 [2024-10-01 13:54:53.231169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.051 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.310 "name": "raid_bdev1", 00:20:43.310 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:43.310 "strip_size_kb": 0, 00:20:43.310 "state": "online", 00:20:43.310 "raid_level": "raid1", 00:20:43.310 "superblock": true, 00:20:43.310 "num_base_bdevs": 2, 00:20:43.310 "num_base_bdevs_discovered": 1, 00:20:43.310 "num_base_bdevs_operational": 1, 00:20:43.310 "base_bdevs_list": [ 00:20:43.310 { 00:20:43.310 "name": null, 00:20:43.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.310 "is_configured": false, 00:20:43.310 "data_offset": 0, 00:20:43.310 "data_size": 7936 00:20:43.310 }, 00:20:43.310 { 00:20:43.310 "name": "BaseBdev2", 00:20:43.310 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:43.310 "is_configured": 
true, 00:20:43.310 "data_offset": 256, 00:20:43.310 "data_size": 7936 00:20:43.310 } 00:20:43.310 ] 00:20:43.310 }' 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.310 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.569 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.569 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.569 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.569 [2024-10-01 13:54:53.734649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.569 [2024-10-01 13:54:53.751373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:43.569 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.569 13:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:43.569 [2024-10-01 13:54:53.753659] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.946 "name": "raid_bdev1", 00:20:44.946 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:44.946 "strip_size_kb": 0, 00:20:44.946 "state": "online", 00:20:44.946 "raid_level": "raid1", 00:20:44.946 "superblock": true, 00:20:44.946 "num_base_bdevs": 2, 00:20:44.946 "num_base_bdevs_discovered": 2, 00:20:44.946 "num_base_bdevs_operational": 2, 00:20:44.946 "process": { 00:20:44.946 "type": "rebuild", 00:20:44.946 "target": "spare", 00:20:44.946 "progress": { 00:20:44.946 "blocks": 2560, 00:20:44.946 "percent": 32 00:20:44.946 } 00:20:44.946 }, 00:20:44.946 "base_bdevs_list": [ 00:20:44.946 { 00:20:44.946 "name": "spare", 00:20:44.946 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:44.946 "is_configured": true, 00:20:44.946 "data_offset": 256, 00:20:44.946 "data_size": 7936 00:20:44.946 }, 00:20:44.946 { 00:20:44.946 "name": "BaseBdev2", 00:20:44.946 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:44.946 "is_configured": true, 00:20:44.946 "data_offset": 256, 00:20:44.946 "data_size": 7936 00:20:44.946 } 00:20:44.946 ] 00:20:44.946 }' 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.946 
13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.946 [2024-10-01 13:54:54.901554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.946 [2024-10-01 13:54:54.959974] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.946 [2024-10-01 13:54:54.960052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.946 [2024-10-01 13:54:54.960071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.946 [2024-10-01 13:54:54.960084] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:44.946 13:54:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.946 13:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.946 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.946 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.946 "name": "raid_bdev1", 00:20:44.946 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:44.946 "strip_size_kb": 0, 00:20:44.946 "state": "online", 00:20:44.946 "raid_level": "raid1", 00:20:44.946 "superblock": true, 00:20:44.946 "num_base_bdevs": 2, 00:20:44.946 "num_base_bdevs_discovered": 1, 00:20:44.946 "num_base_bdevs_operational": 1, 00:20:44.946 "base_bdevs_list": [ 00:20:44.946 { 00:20:44.946 "name": null, 00:20:44.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.946 "is_configured": false, 00:20:44.946 "data_offset": 0, 00:20:44.946 "data_size": 7936 00:20:44.946 }, 00:20:44.946 { 00:20:44.946 "name": "BaseBdev2", 00:20:44.946 "uuid": 
"5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:44.946 "is_configured": true, 00:20:44.946 "data_offset": 256, 00:20:44.946 "data_size": 7936 00:20:44.946 } 00:20:44.946 ] 00:20:44.946 }' 00:20:44.946 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.946 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.514 "name": "raid_bdev1", 00:20:45.514 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:45.514 "strip_size_kb": 0, 00:20:45.514 "state": "online", 00:20:45.514 "raid_level": "raid1", 00:20:45.514 "superblock": true, 00:20:45.514 
"num_base_bdevs": 2, 00:20:45.514 "num_base_bdevs_discovered": 1, 00:20:45.514 "num_base_bdevs_operational": 1, 00:20:45.514 "base_bdevs_list": [ 00:20:45.514 { 00:20:45.514 "name": null, 00:20:45.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.514 "is_configured": false, 00:20:45.514 "data_offset": 0, 00:20:45.514 "data_size": 7936 00:20:45.514 }, 00:20:45.514 { 00:20:45.514 "name": "BaseBdev2", 00:20:45.514 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:45.514 "is_configured": true, 00:20:45.514 "data_offset": 256, 00:20:45.514 "data_size": 7936 00:20:45.514 } 00:20:45.514 ] 00:20:45.514 }' 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.514 [2024-10-01 13:54:55.557393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.514 [2024-10-01 13:54:55.573164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.514 13:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:45.514 [2024-10-01 13:54:55.575616] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.454 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.454 "name": "raid_bdev1", 00:20:46.454 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:46.454 "strip_size_kb": 0, 00:20:46.454 "state": "online", 00:20:46.454 "raid_level": "raid1", 00:20:46.454 "superblock": true, 00:20:46.454 "num_base_bdevs": 2, 00:20:46.454 "num_base_bdevs_discovered": 2, 00:20:46.454 "num_base_bdevs_operational": 2, 00:20:46.454 "process": { 00:20:46.454 "type": "rebuild", 00:20:46.454 "target": "spare", 00:20:46.454 "progress": { 00:20:46.454 "blocks": 2560, 00:20:46.454 "percent": 32 00:20:46.454 } 00:20:46.454 
}, 00:20:46.454 "base_bdevs_list": [ 00:20:46.454 { 00:20:46.454 "name": "spare", 00:20:46.454 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:46.454 "is_configured": true, 00:20:46.454 "data_offset": 256, 00:20:46.455 "data_size": 7936 00:20:46.455 }, 00:20:46.455 { 00:20:46.455 "name": "BaseBdev2", 00:20:46.455 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:46.455 "is_configured": true, 00:20:46.455 "data_offset": 256, 00:20:46.455 "data_size": 7936 00:20:46.455 } 00:20:46.455 ] 00:20:46.455 }' 00:20:46.455 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:46.712 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:46.712 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:46.713 13:54:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.713 "name": "raid_bdev1", 00:20:46.713 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:46.713 "strip_size_kb": 0, 00:20:46.713 "state": "online", 00:20:46.713 "raid_level": "raid1", 00:20:46.713 "superblock": true, 00:20:46.713 "num_base_bdevs": 2, 00:20:46.713 "num_base_bdevs_discovered": 2, 00:20:46.713 "num_base_bdevs_operational": 2, 00:20:46.713 "process": { 00:20:46.713 "type": "rebuild", 00:20:46.713 "target": "spare", 00:20:46.713 "progress": { 00:20:46.713 "blocks": 2816, 00:20:46.713 "percent": 35 00:20:46.713 } 00:20:46.713 }, 00:20:46.713 "base_bdevs_list": [ 00:20:46.713 { 00:20:46.713 "name": "spare", 00:20:46.713 "uuid": 
"f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:46.713 "is_configured": true, 00:20:46.713 "data_offset": 256, 00:20:46.713 "data_size": 7936 00:20:46.713 }, 00:20:46.713 { 00:20:46.713 "name": "BaseBdev2", 00:20:46.713 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:46.713 "is_configured": true, 00:20:46.713 "data_offset": 256, 00:20:46.713 "data_size": 7936 00:20:46.713 } 00:20:46.713 ] 00:20:46.713 }' 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.713 13:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.089 "name": "raid_bdev1", 00:20:48.089 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:48.089 "strip_size_kb": 0, 00:20:48.089 "state": "online", 00:20:48.089 "raid_level": "raid1", 00:20:48.089 "superblock": true, 00:20:48.089 "num_base_bdevs": 2, 00:20:48.089 "num_base_bdevs_discovered": 2, 00:20:48.089 "num_base_bdevs_operational": 2, 00:20:48.089 "process": { 00:20:48.089 "type": "rebuild", 00:20:48.089 "target": "spare", 00:20:48.089 "progress": { 00:20:48.089 "blocks": 5888, 00:20:48.089 "percent": 74 00:20:48.089 } 00:20:48.089 }, 00:20:48.089 "base_bdevs_list": [ 00:20:48.089 { 00:20:48.089 "name": "spare", 00:20:48.089 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:48.089 "is_configured": true, 00:20:48.089 "data_offset": 256, 00:20:48.089 "data_size": 7936 00:20:48.089 }, 00:20:48.089 { 00:20:48.089 "name": "BaseBdev2", 00:20:48.089 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:48.089 "is_configured": true, 00:20:48.089 "data_offset": 256, 00:20:48.089 "data_size": 7936 00:20:48.089 } 00:20:48.089 ] 00:20:48.089 }' 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.089 13:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.089 13:54:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.089 13:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:48.656 [2024-10-01 13:54:58.690960] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:48.656 [2024-10-01 13:54:58.691055] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:48.656 [2024-10-01 13:54:58.691190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.916 13:54:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.916 "name": "raid_bdev1", 00:20:48.916 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:48.916 "strip_size_kb": 0, 00:20:48.916 "state": "online", 00:20:48.916 "raid_level": "raid1", 00:20:48.916 "superblock": true, 00:20:48.916 "num_base_bdevs": 2, 00:20:48.916 "num_base_bdevs_discovered": 2, 00:20:48.916 "num_base_bdevs_operational": 2, 00:20:48.916 "base_bdevs_list": [ 00:20:48.916 { 00:20:48.916 "name": "spare", 00:20:48.916 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:48.916 "is_configured": true, 00:20:48.916 "data_offset": 256, 00:20:48.916 "data_size": 7936 00:20:48.916 }, 00:20:48.916 { 00:20:48.916 "name": "BaseBdev2", 00:20:48.916 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:48.916 "is_configured": true, 00:20:48.916 "data_offset": 256, 00:20:48.916 "data_size": 7936 00:20:48.916 } 00:20:48.916 ] 00:20:48.916 }' 00:20:48.916 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.174 13:54:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.174 "name": "raid_bdev1", 00:20:49.174 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:49.174 "strip_size_kb": 0, 00:20:49.174 "state": "online", 00:20:49.174 "raid_level": "raid1", 00:20:49.174 "superblock": true, 00:20:49.174 "num_base_bdevs": 2, 00:20:49.174 "num_base_bdevs_discovered": 2, 00:20:49.174 "num_base_bdevs_operational": 2, 00:20:49.174 "base_bdevs_list": [ 00:20:49.174 { 00:20:49.174 "name": "spare", 00:20:49.174 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:49.174 "is_configured": true, 00:20:49.174 "data_offset": 256, 00:20:49.174 "data_size": 7936 00:20:49.174 }, 00:20:49.174 { 00:20:49.174 "name": "BaseBdev2", 00:20:49.174 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:49.174 "is_configured": true, 00:20:49.174 "data_offset": 256, 00:20:49.174 "data_size": 7936 00:20:49.174 } 00:20:49.174 ] 00:20:49.174 }' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.174 "name": "raid_bdev1", 00:20:49.174 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:49.174 "strip_size_kb": 0, 00:20:49.174 "state": "online", 00:20:49.174 "raid_level": "raid1", 00:20:49.174 "superblock": true, 00:20:49.174 "num_base_bdevs": 2, 00:20:49.174 "num_base_bdevs_discovered": 2, 00:20:49.174 "num_base_bdevs_operational": 2, 00:20:49.174 "base_bdevs_list": [ 00:20:49.174 { 00:20:49.174 "name": "spare", 00:20:49.174 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:49.174 "is_configured": true, 00:20:49.174 "data_offset": 256, 00:20:49.174 "data_size": 7936 00:20:49.174 }, 00:20:49.174 { 00:20:49.174 "name": "BaseBdev2", 00:20:49.174 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:49.174 "is_configured": true, 00:20:49.174 "data_offset": 256, 00:20:49.174 "data_size": 7936 00:20:49.174 } 00:20:49.174 ] 00:20:49.174 }' 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.174 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.742 [2024-10-01 13:54:59.755924] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.742 [2024-10-01 13:54:59.755961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.742 [2024-10-01 13:54:59.756053] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.742 [2024-10-01 13:54:59.756128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:49.742 [2024-10-01 13:54:59.756141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:49.742 13:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:50.002 /dev/nbd0 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.002 1+0 records in 00:20:50.002 1+0 records out 00:20:50.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442999 s, 9.2 MB/s 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:50.002 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:50.262 /dev/nbd1 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.262 1+0 records in 00:20:50.262 1+0 records out 00:20:50.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600367 s, 6.8 MB/s 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:50.262 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:50.520 13:55:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.520 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.778 13:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:51.037 13:55:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.037 [2024-10-01 13:55:01.104238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:51.037 [2024-10-01 13:55:01.104305] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.037 [2024-10-01 13:55:01.104333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:51.037 [2024-10-01 13:55:01.104346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.037 [2024-10-01 13:55:01.106793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.037 [2024-10-01 13:55:01.106837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:51.037 [2024-10-01 13:55:01.106915] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:51.037 [2024-10-01 13:55:01.106974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.037 [2024-10-01 13:55:01.107120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.037 spare 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.037 [2024-10-01 13:55:01.207072] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:51.037 [2024-10-01 13:55:01.207150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:51.037 [2024-10-01 13:55:01.207298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:51.037 [2024-10-01 13:55:01.207506] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:51.037 [2024-10-01 13:55:01.207519] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:51.037 [2024-10-01 13:55:01.207670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.037 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.037 13:55:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.295 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.295 "name": "raid_bdev1", 00:20:51.295 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:51.295 "strip_size_kb": 0, 00:20:51.295 "state": "online", 00:20:51.295 "raid_level": "raid1", 00:20:51.295 "superblock": true, 00:20:51.295 "num_base_bdevs": 2, 00:20:51.295 "num_base_bdevs_discovered": 2, 00:20:51.295 "num_base_bdevs_operational": 2, 00:20:51.295 "base_bdevs_list": [ 00:20:51.295 { 00:20:51.295 "name": "spare", 00:20:51.295 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:51.295 "is_configured": true, 00:20:51.295 "data_offset": 256, 00:20:51.295 "data_size": 7936 00:20:51.295 }, 00:20:51.295 { 00:20:51.295 "name": "BaseBdev2", 00:20:51.296 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:51.296 "is_configured": true, 00:20:51.296 "data_offset": 256, 00:20:51.296 "data_size": 7936 00:20:51.296 } 00:20:51.296 ] 00:20:51.296 }' 00:20:51.296 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.296 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.554 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.812 "name": "raid_bdev1", 00:20:51.812 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:51.812 "strip_size_kb": 0, 00:20:51.812 "state": "online", 00:20:51.812 "raid_level": "raid1", 00:20:51.812 "superblock": true, 00:20:51.812 "num_base_bdevs": 2, 00:20:51.812 "num_base_bdevs_discovered": 2, 00:20:51.812 "num_base_bdevs_operational": 2, 00:20:51.812 "base_bdevs_list": [ 00:20:51.812 { 00:20:51.812 "name": "spare", 00:20:51.812 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:51.812 "is_configured": true, 00:20:51.812 "data_offset": 256, 00:20:51.812 "data_size": 7936 00:20:51.812 }, 00:20:51.812 { 00:20:51.812 "name": "BaseBdev2", 00:20:51.812 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:51.812 "is_configured": true, 00:20:51.812 "data_offset": 256, 00:20:51.812 "data_size": 7936 00:20:51.812 } 00:20:51.812 ] 00:20:51.812 }' 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.812 
13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.812 [2024-10-01 13:55:01.895643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.812 "name": "raid_bdev1", 00:20:51.812 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:51.812 "strip_size_kb": 0, 00:20:51.812 "state": "online", 00:20:51.812 "raid_level": "raid1", 00:20:51.812 "superblock": true, 00:20:51.812 "num_base_bdevs": 2, 00:20:51.812 "num_base_bdevs_discovered": 1, 00:20:51.812 "num_base_bdevs_operational": 1, 00:20:51.812 "base_bdevs_list": [ 00:20:51.812 { 00:20:51.812 "name": null, 00:20:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.812 "is_configured": false, 00:20:51.812 "data_offset": 0, 00:20:51.812 "data_size": 7936 00:20:51.812 }, 00:20:51.812 { 00:20:51.812 
"name": "BaseBdev2", 00:20:51.812 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:51.812 "is_configured": true, 00:20:51.812 "data_offset": 256, 00:20:51.812 "data_size": 7936 00:20:51.812 } 00:20:51.812 ] 00:20:51.812 }' 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.812 13:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.377 13:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:52.377 13:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.377 13:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:52.377 [2024-10-01 13:55:02.335688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.377 [2024-10-01 13:55:02.335899] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:52.377 [2024-10-01 13:55:02.335927] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:52.377 [2024-10-01 13:55:02.335979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.377 [2024-10-01 13:55:02.351457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:52.377 13:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.377 13:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:52.377 [2024-10-01 13:55:02.353765] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.312 "name": "raid_bdev1", 00:20:53.312 
"uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:53.312 "strip_size_kb": 0, 00:20:53.312 "state": "online", 00:20:53.312 "raid_level": "raid1", 00:20:53.312 "superblock": true, 00:20:53.312 "num_base_bdevs": 2, 00:20:53.312 "num_base_bdevs_discovered": 2, 00:20:53.312 "num_base_bdevs_operational": 2, 00:20:53.312 "process": { 00:20:53.312 "type": "rebuild", 00:20:53.312 "target": "spare", 00:20:53.312 "progress": { 00:20:53.312 "blocks": 2560, 00:20:53.312 "percent": 32 00:20:53.312 } 00:20:53.312 }, 00:20:53.312 "base_bdevs_list": [ 00:20:53.312 { 00:20:53.312 "name": "spare", 00:20:53.312 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:53.312 "is_configured": true, 00:20:53.312 "data_offset": 256, 00:20:53.312 "data_size": 7936 00:20:53.312 }, 00:20:53.312 { 00:20:53.312 "name": "BaseBdev2", 00:20:53.312 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:53.312 "is_configured": true, 00:20:53.312 "data_offset": 256, 00:20:53.312 "data_size": 7936 00:20:53.312 } 00:20:53.312 ] 00:20:53.312 }' 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.312 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 [2024-10-01 13:55:03.517615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:53.597 
[2024-10-01 13:55:03.559845] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:53.597 [2024-10-01 13:55:03.559972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.597 [2024-10-01 13:55:03.559990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:53.597 [2024-10-01 13:55:03.560002] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.597 13:55:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.597 "name": "raid_bdev1", 00:20:53.597 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:53.597 "strip_size_kb": 0, 00:20:53.597 "state": "online", 00:20:53.597 "raid_level": "raid1", 00:20:53.597 "superblock": true, 00:20:53.597 "num_base_bdevs": 2, 00:20:53.597 "num_base_bdevs_discovered": 1, 00:20:53.597 "num_base_bdevs_operational": 1, 00:20:53.597 "base_bdevs_list": [ 00:20:53.597 { 00:20:53.597 "name": null, 00:20:53.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.597 "is_configured": false, 00:20:53.597 "data_offset": 0, 00:20:53.597 "data_size": 7936 00:20:53.597 }, 00:20:53.597 { 00:20:53.597 "name": "BaseBdev2", 00:20:53.597 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:53.597 "is_configured": true, 00:20:53.597 "data_offset": 256, 00:20:53.597 "data_size": 7936 00:20:53.597 } 00:20:53.597 ] 00:20:53.597 }' 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.597 13:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 13:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:53.856 13:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.856 13:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.856 [2024-10-01 13:55:04.028993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:53.856 [2024-10-01 13:55:04.029070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.856 [2024-10-01 13:55:04.029098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:53.856 [2024-10-01 13:55:04.029113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.856 [2024-10-01 13:55:04.029376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.856 [2024-10-01 13:55:04.029414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:53.856 [2024-10-01 13:55:04.029482] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:53.856 [2024-10-01 13:55:04.029499] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:53.856 [2024-10-01 13:55:04.029511] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:53.856 [2024-10-01 13:55:04.029549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:53.856 [2024-10-01 13:55:04.043718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:53.856 spare 00:20:53.856 13:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.856 13:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:53.856 [2024-10-01 13:55:04.045970] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.234 "name": 
"raid_bdev1", 00:20:55.234 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:55.234 "strip_size_kb": 0, 00:20:55.234 "state": "online", 00:20:55.234 "raid_level": "raid1", 00:20:55.234 "superblock": true, 00:20:55.234 "num_base_bdevs": 2, 00:20:55.234 "num_base_bdevs_discovered": 2, 00:20:55.234 "num_base_bdevs_operational": 2, 00:20:55.234 "process": { 00:20:55.234 "type": "rebuild", 00:20:55.234 "target": "spare", 00:20:55.234 "progress": { 00:20:55.234 "blocks": 2560, 00:20:55.234 "percent": 32 00:20:55.234 } 00:20:55.234 }, 00:20:55.234 "base_bdevs_list": [ 00:20:55.234 { 00:20:55.234 "name": "spare", 00:20:55.234 "uuid": "f4d52789-bc82-589a-841f-5bbd4508b18e", 00:20:55.234 "is_configured": true, 00:20:55.234 "data_offset": 256, 00:20:55.234 "data_size": 7936 00:20:55.234 }, 00:20:55.234 { 00:20:55.234 "name": "BaseBdev2", 00:20:55.234 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:55.234 "is_configured": true, 00:20:55.234 "data_offset": 256, 00:20:55.234 "data_size": 7936 00:20:55.234 } 00:20:55.234 ] 00:20:55.234 }' 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.234 [2024-10-01 13:55:05.166861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:55.234 [2024-10-01 13:55:05.252124] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:55.234 [2024-10-01 13:55:05.252471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.234 [2024-10-01 13:55:05.252583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.234 [2024-10-01 13:55:05.252624] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.234 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.234 "name": "raid_bdev1", 00:20:55.234 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:55.234 "strip_size_kb": 0, 00:20:55.234 "state": "online", 00:20:55.234 "raid_level": "raid1", 00:20:55.234 "superblock": true, 00:20:55.234 "num_base_bdevs": 2, 00:20:55.234 "num_base_bdevs_discovered": 1, 00:20:55.234 "num_base_bdevs_operational": 1, 00:20:55.234 "base_bdevs_list": [ 00:20:55.234 { 00:20:55.234 "name": null, 00:20:55.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.234 "is_configured": false, 00:20:55.234 "data_offset": 0, 00:20:55.234 "data_size": 7936 00:20:55.234 }, 00:20:55.234 { 00:20:55.234 "name": "BaseBdev2", 00:20:55.234 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:55.235 "is_configured": true, 00:20:55.235 "data_offset": 256, 00:20:55.235 "data_size": 7936 00:20:55.235 } 00:20:55.235 ] 00:20:55.235 }' 00:20:55.235 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.235 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.804 13:55:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.804 "name": "raid_bdev1", 00:20:55.804 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:55.804 "strip_size_kb": 0, 00:20:55.804 "state": "online", 00:20:55.804 "raid_level": "raid1", 00:20:55.804 "superblock": true, 00:20:55.804 "num_base_bdevs": 2, 00:20:55.804 "num_base_bdevs_discovered": 1, 00:20:55.804 "num_base_bdevs_operational": 1, 00:20:55.804 "base_bdevs_list": [ 00:20:55.804 { 00:20:55.804 "name": null, 00:20:55.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.804 "is_configured": false, 00:20:55.804 "data_offset": 0, 00:20:55.804 "data_size": 7936 00:20:55.804 }, 00:20:55.804 { 00:20:55.804 "name": "BaseBdev2", 00:20:55.804 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:55.804 "is_configured": true, 00:20:55.804 "data_offset": 256, 00:20:55.804 "data_size": 7936 00:20:55.804 } 00:20:55.804 ] 00:20:55.804 }' 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.804 [2024-10-01 13:55:05.892539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:55.804 [2024-10-01 13:55:05.892618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.804 [2024-10-01 13:55:05.892648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:55.804 [2024-10-01 13:55:05.892660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.804 [2024-10-01 13:55:05.892895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.804 [2024-10-01 13:55:05.892910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:55.804 [2024-10-01 13:55:05.892971] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:55.804 [2024-10-01 13:55:05.892987] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:55.804 [2024-10-01 13:55:05.893003] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:55.804 [2024-10-01 13:55:05.893020] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:55.804 BaseBdev1 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.804 13:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.742 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.001 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.001 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.001 "name": "raid_bdev1", 00:20:57.001 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:57.001 "strip_size_kb": 0, 00:20:57.001 "state": "online", 00:20:57.001 "raid_level": "raid1", 00:20:57.001 "superblock": true, 00:20:57.001 "num_base_bdevs": 2, 00:20:57.001 "num_base_bdevs_discovered": 1, 00:20:57.001 "num_base_bdevs_operational": 1, 00:20:57.001 "base_bdevs_list": [ 00:20:57.001 { 00:20:57.001 "name": null, 00:20:57.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.001 "is_configured": false, 00:20:57.001 "data_offset": 0, 00:20:57.001 "data_size": 7936 00:20:57.001 }, 00:20:57.001 { 00:20:57.001 "name": "BaseBdev2", 00:20:57.001 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:57.001 "is_configured": true, 00:20:57.001 "data_offset": 256, 00:20:57.001 "data_size": 7936 00:20:57.001 } 00:20:57.001 ] 00:20:57.001 }' 00:20:57.001 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.001 13:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.260 "name": "raid_bdev1", 00:20:57.260 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:57.260 "strip_size_kb": 0, 00:20:57.260 "state": "online", 00:20:57.260 "raid_level": "raid1", 00:20:57.260 "superblock": true, 00:20:57.260 "num_base_bdevs": 2, 00:20:57.260 "num_base_bdevs_discovered": 1, 00:20:57.260 "num_base_bdevs_operational": 1, 00:20:57.260 "base_bdevs_list": [ 00:20:57.260 { 00:20:57.260 "name": null, 00:20:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.260 "is_configured": false, 00:20:57.260 "data_offset": 0, 00:20:57.260 "data_size": 7936 00:20:57.260 }, 00:20:57.260 { 00:20:57.260 "name": "BaseBdev2", 00:20:57.260 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:57.260 "is_configured": 
true, 00:20:57.260 "data_offset": 256, 00:20:57.260 "data_size": 7936 00:20:57.260 } 00:20:57.260 ] 00:20:57.260 }' 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:57.260 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.520 [2024-10-01 13:55:07.483711] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.520 [2024-10-01 13:55:07.483887] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:57.520 [2024-10-01 13:55:07.483906] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:57.520 request: 00:20:57.520 { 00:20:57.520 "base_bdev": "BaseBdev1", 00:20:57.520 "raid_bdev": "raid_bdev1", 00:20:57.520 "method": "bdev_raid_add_base_bdev", 00:20:57.520 "req_id": 1 00:20:57.520 } 00:20:57.520 Got JSON-RPC error response 00:20:57.520 response: 00:20:57.520 { 00:20:57.520 "code": -22, 00:20:57.520 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:57.520 } 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.520 13:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.456 "name": "raid_bdev1", 00:20:58.456 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:58.456 "strip_size_kb": 0, 00:20:58.456 "state": "online", 00:20:58.456 "raid_level": "raid1", 00:20:58.456 "superblock": true, 00:20:58.456 "num_base_bdevs": 2, 00:20:58.456 "num_base_bdevs_discovered": 1, 00:20:58.456 "num_base_bdevs_operational": 1, 00:20:58.456 "base_bdevs_list": [ 00:20:58.456 { 00:20:58.456 "name": null, 00:20:58.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.456 "is_configured": false, 00:20:58.456 
"data_offset": 0, 00:20:58.456 "data_size": 7936 00:20:58.456 }, 00:20:58.456 { 00:20:58.456 "name": "BaseBdev2", 00:20:58.456 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:58.456 "is_configured": true, 00:20:58.456 "data_offset": 256, 00:20:58.456 "data_size": 7936 00:20:58.456 } 00:20:58.456 ] 00:20:58.456 }' 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.456 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 13:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.027 "name": "raid_bdev1", 00:20:59.027 "uuid": "5c653dc1-c20f-44d8-bac7-25f89b21d703", 00:20:59.027 
"strip_size_kb": 0, 00:20:59.027 "state": "online", 00:20:59.027 "raid_level": "raid1", 00:20:59.027 "superblock": true, 00:20:59.027 "num_base_bdevs": 2, 00:20:59.027 "num_base_bdevs_discovered": 1, 00:20:59.027 "num_base_bdevs_operational": 1, 00:20:59.027 "base_bdevs_list": [ 00:20:59.027 { 00:20:59.027 "name": null, 00:20:59.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.027 "is_configured": false, 00:20:59.027 "data_offset": 0, 00:20:59.027 "data_size": 7936 00:20:59.027 }, 00:20:59.027 { 00:20:59.027 "name": "BaseBdev2", 00:20:59.027 "uuid": "5f8624de-e10f-58a2-8a8d-1c94eb80dae8", 00:20:59.027 "is_configured": true, 00:20:59.027 "data_offset": 256, 00:20:59.027 "data_size": 7936 00:20:59.027 } 00:20:59.027 ] 00:20:59.027 }' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87845 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87845 ']' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87845 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87845 00:20:59.027 13:55:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:59.027 killing process with pid 87845 00:20:59.027 Received shutdown signal, test time was about 60.000000 seconds 00:20:59.027 00:20:59.027 Latency(us) 00:20:59.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.027 =================================================================================================================== 00:20:59.027 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87845' 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87845 00:20:59.027 [2024-10-01 13:55:09.161352] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.027 13:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87845 00:20:59.027 [2024-10-01 13:55:09.161508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.027 [2024-10-01 13:55:09.161566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.027 [2024-10-01 13:55:09.161596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:59.597 [2024-10-01 13:55:09.496200] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.975 13:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:00.975 00:21:00.975 real 0m20.605s 00:21:00.975 user 0m26.892s 00:21:00.975 sys 0m3.025s 00:21:00.975 13:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:21:00.975 13:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.975 ************************************ 00:21:00.975 END TEST raid_rebuild_test_sb_md_separate 00:21:00.975 ************************************ 00:21:00.975 13:55:10 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:00.975 13:55:10 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:00.975 13:55:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:00.975 13:55:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:00.975 13:55:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.975 ************************************ 00:21:00.975 START TEST raid_state_function_test_sb_md_interleaved 00:21:00.975 ************************************ 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:00.975 13:55:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=88547 00:21:00.975 Process raid pid: 88547 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88547' 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88547 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88547 ']' 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.975 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.976 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.976 13:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.976 [2024-10-01 13:55:11.002042] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:21:00.976 [2024-10-01 13:55:11.002176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.235 [2024-10-01 13:55:11.178356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.235 [2024-10-01 13:55:11.404228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.493 [2024-10-01 13:55:11.631710] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.494 [2024-10-01 13:55:11.631755] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.753 [2024-10-01 13:55:11.858590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.753 [2024-10-01 13:55:11.858784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.753 [2024-10-01 13:55:11.858813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.753 [2024-10-01 13:55:11.858829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.753 13:55:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.753 13:55:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.753 "name": "Existed_Raid", 00:21:01.753 "uuid": "71d445f9-3342-45c6-b528-22d5e3ef521c", 00:21:01.753 "strip_size_kb": 0, 00:21:01.753 "state": "configuring", 00:21:01.753 "raid_level": "raid1", 00:21:01.753 "superblock": true, 00:21:01.753 "num_base_bdevs": 2, 00:21:01.753 "num_base_bdevs_discovered": 0, 00:21:01.753 "num_base_bdevs_operational": 2, 00:21:01.753 "base_bdevs_list": [ 00:21:01.753 { 00:21:01.753 "name": "BaseBdev1", 00:21:01.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.753 "is_configured": false, 00:21:01.753 "data_offset": 0, 00:21:01.753 "data_size": 0 00:21:01.753 }, 00:21:01.753 { 00:21:01.753 "name": "BaseBdev2", 00:21:01.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.753 "is_configured": false, 00:21:01.753 "data_offset": 0, 00:21:01.753 "data_size": 0 00:21:01.753 } 00:21:01.753 ] 00:21:01.753 }' 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.753 13:55:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 [2024-10-01 13:55:12.281937] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.323 [2024-10-01 13:55:12.281981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 [2024-10-01 13:55:12.293964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.323 [2024-10-01 13:55:12.294147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.323 [2024-10-01 13:55:12.294170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.323 [2024-10-01 13:55:12.294189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 [2024-10-01 13:55:12.358116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.323 BaseBdev1 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 [ 00:21:02.323 { 00:21:02.323 "name": "BaseBdev1", 00:21:02.323 "aliases": [ 00:21:02.323 "8e646178-7ea3-4290-ac76-59553de27249" 00:21:02.323 ], 00:21:02.323 "product_name": "Malloc disk", 00:21:02.323 "block_size": 4128, 00:21:02.323 "num_blocks": 8192, 00:21:02.323 "uuid": "8e646178-7ea3-4290-ac76-59553de27249", 00:21:02.323 "md_size": 32, 00:21:02.323 
"md_interleave": true, 00:21:02.323 "dif_type": 0, 00:21:02.323 "assigned_rate_limits": { 00:21:02.323 "rw_ios_per_sec": 0, 00:21:02.323 "rw_mbytes_per_sec": 0, 00:21:02.323 "r_mbytes_per_sec": 0, 00:21:02.323 "w_mbytes_per_sec": 0 00:21:02.323 }, 00:21:02.323 "claimed": true, 00:21:02.323 "claim_type": "exclusive_write", 00:21:02.323 "zoned": false, 00:21:02.323 "supported_io_types": { 00:21:02.323 "read": true, 00:21:02.323 "write": true, 00:21:02.323 "unmap": true, 00:21:02.323 "flush": true, 00:21:02.323 "reset": true, 00:21:02.323 "nvme_admin": false, 00:21:02.323 "nvme_io": false, 00:21:02.323 "nvme_io_md": false, 00:21:02.323 "write_zeroes": true, 00:21:02.323 "zcopy": true, 00:21:02.323 "get_zone_info": false, 00:21:02.323 "zone_management": false, 00:21:02.323 "zone_append": false, 00:21:02.323 "compare": false, 00:21:02.323 "compare_and_write": false, 00:21:02.323 "abort": true, 00:21:02.323 "seek_hole": false, 00:21:02.323 "seek_data": false, 00:21:02.323 "copy": true, 00:21:02.323 "nvme_iov_md": false 00:21:02.323 }, 00:21:02.323 "memory_domains": [ 00:21:02.323 { 00:21:02.323 "dma_device_id": "system", 00:21:02.323 "dma_device_type": 1 00:21:02.323 }, 00:21:02.323 { 00:21:02.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.323 "dma_device_type": 2 00:21:02.323 } 00:21:02.323 ], 00:21:02.323 "driver_specific": {} 00:21:02.323 } 00:21:02.323 ] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.323 13:55:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.323 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.323 "name": "Existed_Raid", 00:21:02.323 "uuid": "d57acb9d-2276-4779-a583-026334514dd4", 00:21:02.323 "strip_size_kb": 0, 00:21:02.323 "state": "configuring", 00:21:02.323 "raid_level": "raid1", 
00:21:02.323 "superblock": true, 00:21:02.323 "num_base_bdevs": 2, 00:21:02.323 "num_base_bdevs_discovered": 1, 00:21:02.323 "num_base_bdevs_operational": 2, 00:21:02.323 "base_bdevs_list": [ 00:21:02.323 { 00:21:02.323 "name": "BaseBdev1", 00:21:02.323 "uuid": "8e646178-7ea3-4290-ac76-59553de27249", 00:21:02.323 "is_configured": true, 00:21:02.323 "data_offset": 256, 00:21:02.323 "data_size": 7936 00:21:02.323 }, 00:21:02.324 { 00:21:02.324 "name": "BaseBdev2", 00:21:02.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.324 "is_configured": false, 00:21:02.324 "data_offset": 0, 00:21:02.324 "data_size": 0 00:21:02.324 } 00:21:02.324 ] 00:21:02.324 }' 00:21:02.324 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.324 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.891 [2024-10-01 13:55:12.837557] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.891 [2024-10-01 13:55:12.837617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.891 [2024-10-01 13:55:12.849604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.891 [2024-10-01 13:55:12.851943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.891 [2024-10-01 13:55:12.852026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.891 
13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.891 "name": "Existed_Raid", 00:21:02.891 "uuid": "f24dcd1e-f896-4056-a56a-6128afd7d5a5", 00:21:02.891 "strip_size_kb": 0, 00:21:02.891 "state": "configuring", 00:21:02.891 "raid_level": "raid1", 00:21:02.891 "superblock": true, 00:21:02.891 "num_base_bdevs": 2, 00:21:02.891 "num_base_bdevs_discovered": 1, 00:21:02.891 "num_base_bdevs_operational": 2, 00:21:02.891 "base_bdevs_list": [ 00:21:02.891 { 00:21:02.891 "name": "BaseBdev1", 00:21:02.891 "uuid": "8e646178-7ea3-4290-ac76-59553de27249", 00:21:02.891 "is_configured": true, 00:21:02.891 "data_offset": 256, 00:21:02.891 "data_size": 7936 00:21:02.891 }, 00:21:02.891 { 00:21:02.891 "name": "BaseBdev2", 00:21:02.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.891 "is_configured": false, 00:21:02.891 "data_offset": 0, 00:21:02.891 "data_size": 0 00:21:02.891 } 00:21:02.891 ] 00:21:02.891 }' 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:02.891 13:55:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.150 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:03.150 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.150 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.409 [2024-10-01 13:55:13.346790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.409 [2024-10-01 13:55:13.347328] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:03.409 [2024-10-01 13:55:13.347351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:03.409 [2024-10-01 13:55:13.347513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:03.409 [2024-10-01 13:55:13.347593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:03.409 [2024-10-01 13:55:13.347608] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:03.409 [2024-10-01 13:55:13.347680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.409 BaseBdev2 00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.409 [
00:21:03.409 {
00:21:03.409 "name": "BaseBdev2",
00:21:03.409 "aliases": [
00:21:03.409 "a8061c81-6bbd-4967-9fd8-3d7c618fde28"
00:21:03.409 ],
00:21:03.409 "product_name": "Malloc disk",
00:21:03.409 "block_size": 4128,
00:21:03.409 "num_blocks": 8192,
00:21:03.409 "uuid": "a8061c81-6bbd-4967-9fd8-3d7c618fde28",
00:21:03.409 "md_size": 32,
00:21:03.409 "md_interleave": true,
00:21:03.409 "dif_type": 0,
00:21:03.409 "assigned_rate_limits": {
00:21:03.409 "rw_ios_per_sec": 0,
00:21:03.409 "rw_mbytes_per_sec": 0,
00:21:03.409 "r_mbytes_per_sec": 0,
00:21:03.409 "w_mbytes_per_sec": 0
00:21:03.409 },
00:21:03.409 "claimed": true,
00:21:03.409 "claim_type": "exclusive_write",
00:21:03.409 "zoned": false,
00:21:03.409 "supported_io_types": {
00:21:03.409 "read": true,
00:21:03.409 "write": true,
00:21:03.409 "unmap": true,
00:21:03.409 "flush": true,
00:21:03.409 "reset": true,
00:21:03.409 "nvme_admin": false,
00:21:03.409 "nvme_io": false,
00:21:03.409 "nvme_io_md": false,
00:21:03.409 "write_zeroes": true,
00:21:03.409 "zcopy": true,
00:21:03.409 "get_zone_info": false,
00:21:03.409 "zone_management": false,
00:21:03.409 "zone_append": false,
00:21:03.409 "compare": false,
00:21:03.409 "compare_and_write": false,
00:21:03.409 "abort": true,
00:21:03.409 "seek_hole": false,
00:21:03.409 "seek_data": false,
00:21:03.409 "copy": true,
00:21:03.409 "nvme_iov_md": false
00:21:03.409 },
00:21:03.409 "memory_domains": [
00:21:03.409 {
00:21:03.409 "dma_device_id": "system",
00:21:03.409 "dma_device_type": 1
00:21:03.409 },
00:21:03.409 {
00:21:03.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:03.409 "dma_device_type": 2
00:21:03.409 }
00:21:03.409 ],
00:21:03.409 "driver_specific": {}
00:21:03.409 }
00:21:03.409 ]
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.409 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.410 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.410 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:03.410 "name": "Existed_Raid",
00:21:03.410 "uuid": "f24dcd1e-f896-4056-a56a-6128afd7d5a5",
00:21:03.410 "strip_size_kb": 0,
00:21:03.410 "state": "online",
00:21:03.410 "raid_level": "raid1",
00:21:03.410 "superblock": true,
00:21:03.410 "num_base_bdevs": 2,
00:21:03.410 "num_base_bdevs_discovered": 2,
00:21:03.410 "num_base_bdevs_operational": 2,
00:21:03.410 "base_bdevs_list": [
00:21:03.410 {
00:21:03.410 "name": "BaseBdev1",
00:21:03.410 "uuid": "8e646178-7ea3-4290-ac76-59553de27249",
00:21:03.410 "is_configured": true,
00:21:03.410 "data_offset": 256,
00:21:03.410 "data_size": 7936
00:21:03.410 },
00:21:03.410 {
00:21:03.410 "name": "BaseBdev2",
00:21:03.410 "uuid": "a8061c81-6bbd-4967-9fd8-3d7c618fde28",
00:21:03.410 "is_configured": true,
00:21:03.410 "data_offset": 256,
00:21:03.410 "data_size": 7936
00:21:03.410 }
00:21:03.410 ]
00:21:03.410 }'
00:21:03.410 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:03.410 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.668 [2024-10-01 13:55:13.814615] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:03.668 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.669 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:03.669 "name": "Existed_Raid",
00:21:03.669 "aliases": [
00:21:03.669 "f24dcd1e-f896-4056-a56a-6128afd7d5a5"
00:21:03.669 ],
00:21:03.669 "product_name": "Raid Volume",
00:21:03.669 "block_size": 4128,
00:21:03.669 "num_blocks": 7936,
00:21:03.669 "uuid": "f24dcd1e-f896-4056-a56a-6128afd7d5a5",
00:21:03.669 "md_size": 32,
00:21:03.669 "md_interleave": true,
00:21:03.669 "dif_type": 0,
00:21:03.669 "assigned_rate_limits": {
00:21:03.669 "rw_ios_per_sec": 0,
00:21:03.669 "rw_mbytes_per_sec": 0,
00:21:03.669 "r_mbytes_per_sec": 0,
00:21:03.669 "w_mbytes_per_sec": 0
00:21:03.669 },
00:21:03.669 "claimed": false,
00:21:03.669 "zoned": false,
00:21:03.669 "supported_io_types": {
00:21:03.669 "read": true,
00:21:03.669 "write": true,
00:21:03.669 "unmap": false,
00:21:03.669 "flush": false,
00:21:03.669 "reset": true,
00:21:03.669 "nvme_admin": false,
00:21:03.669 "nvme_io": false,
00:21:03.669 "nvme_io_md": false,
00:21:03.669 "write_zeroes": true,
00:21:03.669 "zcopy": false,
00:21:03.669 "get_zone_info": false,
00:21:03.669 "zone_management": false,
00:21:03.669 "zone_append": false,
00:21:03.669 "compare": false,
00:21:03.669 "compare_and_write": false,
00:21:03.669 "abort": false,
00:21:03.669 "seek_hole": false,
00:21:03.669 "seek_data": false,
00:21:03.669 "copy": false,
00:21:03.669 "nvme_iov_md": false
00:21:03.669 },
00:21:03.669 "memory_domains": [
00:21:03.669 {
00:21:03.669 "dma_device_id": "system",
00:21:03.669 "dma_device_type": 1
00:21:03.669 },
00:21:03.669 {
00:21:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:03.669 "dma_device_type": 2
00:21:03.669 },
00:21:03.669 {
00:21:03.669 "dma_device_id": "system",
00:21:03.669 "dma_device_type": 1
00:21:03.669 },
00:21:03.669 {
00:21:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:03.669 "dma_device_type": 2
00:21:03.669 }
00:21:03.669 ],
00:21:03.669 "driver_specific": {
00:21:03.669 "raid": {
00:21:03.669 "uuid": "f24dcd1e-f896-4056-a56a-6128afd7d5a5",
00:21:03.669 "strip_size_kb": 0,
00:21:03.669 "state": "online",
00:21:03.669 "raid_level": "raid1",
00:21:03.669 "superblock": true,
00:21:03.669 "num_base_bdevs": 2,
00:21:03.669 "num_base_bdevs_discovered": 2,
00:21:03.669 "num_base_bdevs_operational": 2,
00:21:03.669 "base_bdevs_list": [
00:21:03.669 {
00:21:03.669 "name": "BaseBdev1",
00:21:03.669 "uuid": "8e646178-7ea3-4290-ac76-59553de27249",
00:21:03.669 "is_configured": true,
00:21:03.669 "data_offset": 256,
00:21:03.669 "data_size": 7936
00:21:03.669 },
00:21:03.669 {
00:21:03.669 "name": "BaseBdev2",
00:21:03.669 "uuid": "a8061c81-6bbd-4967-9fd8-3d7c618fde28",
00:21:03.669 "is_configured": true,
00:21:03.669 "data_offset": 256,
00:21:03.669 "data_size": 7936
00:21:03.669 }
00:21:03.669 ]
00:21:03.669 }
00:21:03.669 }
00:21:03.669 }'
00:21:03.669 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:21:03.927 BaseBdev2'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:03.927 13:55:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.927 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:03.928 [2024-10-01 13:55:14.046134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:04.186 "name": "Existed_Raid",
00:21:04.186 "uuid": "f24dcd1e-f896-4056-a56a-6128afd7d5a5",
00:21:04.186 "strip_size_kb": 0,
00:21:04.186 "state": "online",
00:21:04.186 "raid_level": "raid1",
00:21:04.186 "superblock": true,
00:21:04.186 "num_base_bdevs": 2,
00:21:04.186 "num_base_bdevs_discovered": 1,
00:21:04.186 "num_base_bdevs_operational": 1,
00:21:04.186 "base_bdevs_list": [
00:21:04.186 {
00:21:04.186 "name": null,
00:21:04.186 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:04.186 "is_configured": false,
00:21:04.186 "data_offset": 0,
00:21:04.186 "data_size": 7936
00:21:04.186 },
00:21:04.186 {
00:21:04.186 "name": "BaseBdev2",
00:21:04.186 "uuid": "a8061c81-6bbd-4967-9fd8-3d7c618fde28",
00:21:04.186 "is_configured": true,
00:21:04.186 "data_offset": 256,
00:21:04.186 "data_size": 7936
00:21:04.186 }
00:21:04.186 ]
00:21:04.186 }'
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:04.186 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:04.444 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:04.701 [2024-10-01 13:55:14.657429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:04.701 [2024-10-01 13:55:14.658506] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:04.701 [2024-10-01 13:55:14.763330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:04.701 [2024-10-01 13:55:14.763387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:04.701 [2024-10-01 13:55:14.763422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88547
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88547 ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88547
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88547
00:21:04.701 killing process with pid 88547 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88547'
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88547
00:21:04.701 [2024-10-01 13:55:14.856764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:04.701 13:55:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88547
00:21:04.701 [2024-10-01 13:55:14.875582] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:06.074 13:55:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:21:06.074
00:21:06.074 real 0m5.338s
00:21:06.074 user 0m7.475s
00:21:06.074 sys 0m1.018s
00:21:06.074 ************************************
00:21:06.074 END TEST raid_state_function_test_sb_md_interleaved
00:21:06.074 ************************************
00:21:06.074 13:55:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:06.074 13:55:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:06.333 13:55:16 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:21:06.333 13:55:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:21:06.333 13:55:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:06.333 13:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:06.333 ************************************
00:21:06.333 START TEST raid_superblock_test_md_interleaved
00:21:06.333 ************************************
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88799
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88799
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88799 ']'
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:06.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:06.333 13:55:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:06.333 [2024-10-01 13:55:16.405536] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:21:06.333 [2024-10-01 13:55:16.405676] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88799 ]
00:21:06.591 [2024-10-01 13:55:16.581575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:06.887 [2024-10-01 13:55:16.811165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:21:06.887 [2024-10-01 13:55:17.033666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:06.887 [2024-10-01 13:55:17.033896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.145 malloc1
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.145 [2024-10-01 13:55:17.314535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:21:07.145 [2024-10-01 13:55:17.314709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:07.145 [2024-10-01 13:55:17.314769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:21:07.145 [2024-10-01 13:55:17.314852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:07.145 [2024-10-01 13:55:17.316978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:07.145 [2024-10-01 13:55:17.317112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:21:07.145 pt1
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.145 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.404 malloc2
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.404 [2024-10-01 13:55:17.380530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:07.404 [2024-10-01 13:55:17.380704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:07.404 [2024-10-01 13:55:17.380763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:21:07.404 [2024-10-01 13:55:17.380777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:07.404 [2024-10-01 13:55:17.382870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:07.404 [2024-10-01 13:55:17.382909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:07.404 pt2
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.404 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.404 [2024-10-01 13:55:17.392600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:21:07.404 [2024-10-01 13:55:17.394625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:07.404 [2024-10-01 13:55:17.394813] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:21:07.404 [2024-10-01 13:55:17.394828] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:21:07.404 [2024-10-01 13:55:17.394906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:21:07.404 [2024-10-01 13:55:17.394967] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:21:07.404 [2024-10-01 13:55:17.394984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:21:07.404 [2024-10-01 13:55:17.395053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:07.405 "name": "raid_bdev1",
00:21:07.405 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c",
00:21:07.405 "strip_size_kb": 0,
00:21:07.405 "state": "online",
00:21:07.405 "raid_level": "raid1",
00:21:07.405 "superblock": true,
00:21:07.405 "num_base_bdevs": 2,
00:21:07.405 "num_base_bdevs_discovered": 2,
00:21:07.405 "num_base_bdevs_operational": 2,
00:21:07.405 "base_bdevs_list": [
00:21:07.405 {
00:21:07.405 "name": "pt1",
00:21:07.405 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:07.405 "is_configured": true,
00:21:07.405 "data_offset": 256,
00:21:07.405 "data_size": 7936
00:21:07.405 },
00:21:07.405 {
00:21:07.405 "name": "pt2",
00:21:07.405 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:07.405 "is_configured": true,
00:21:07.405 "data_offset": 256,
00:21:07.405 "data_size": 7936
00:21:07.405 }
00:21:07.405 ]
00:21:07.405 }'
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:07.405 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.663 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:21:07.663 [2024-10-01 13:55:17.828246] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:07.921 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.921 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:07.921 "name": "raid_bdev1",
00:21:07.921 "aliases": [
00:21:07.921 "f4ccd431-aba7-4b59-aed3-a73415e8888c"
00:21:07.921 ],
00:21:07.921 "product_name": "Raid Volume",
00:21:07.921 "block_size": 4128,
00:21:07.921
"num_blocks": 7936, 00:21:07.921 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:07.921 "md_size": 32, 00:21:07.921 "md_interleave": true, 00:21:07.921 "dif_type": 0, 00:21:07.921 "assigned_rate_limits": { 00:21:07.921 "rw_ios_per_sec": 0, 00:21:07.921 "rw_mbytes_per_sec": 0, 00:21:07.921 "r_mbytes_per_sec": 0, 00:21:07.921 "w_mbytes_per_sec": 0 00:21:07.921 }, 00:21:07.921 "claimed": false, 00:21:07.921 "zoned": false, 00:21:07.921 "supported_io_types": { 00:21:07.921 "read": true, 00:21:07.921 "write": true, 00:21:07.921 "unmap": false, 00:21:07.921 "flush": false, 00:21:07.921 "reset": true, 00:21:07.921 "nvme_admin": false, 00:21:07.921 "nvme_io": false, 00:21:07.921 "nvme_io_md": false, 00:21:07.921 "write_zeroes": true, 00:21:07.921 "zcopy": false, 00:21:07.921 "get_zone_info": false, 00:21:07.921 "zone_management": false, 00:21:07.921 "zone_append": false, 00:21:07.921 "compare": false, 00:21:07.921 "compare_and_write": false, 00:21:07.921 "abort": false, 00:21:07.921 "seek_hole": false, 00:21:07.921 "seek_data": false, 00:21:07.921 "copy": false, 00:21:07.921 "nvme_iov_md": false 00:21:07.921 }, 00:21:07.921 "memory_domains": [ 00:21:07.921 { 00:21:07.921 "dma_device_id": "system", 00:21:07.921 "dma_device_type": 1 00:21:07.921 }, 00:21:07.921 { 00:21:07.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.921 "dma_device_type": 2 00:21:07.921 }, 00:21:07.921 { 00:21:07.921 "dma_device_id": "system", 00:21:07.921 "dma_device_type": 1 00:21:07.921 }, 00:21:07.921 { 00:21:07.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.922 "dma_device_type": 2 00:21:07.922 } 00:21:07.922 ], 00:21:07.922 "driver_specific": { 00:21:07.922 "raid": { 00:21:07.922 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:07.922 "strip_size_kb": 0, 00:21:07.922 "state": "online", 00:21:07.922 "raid_level": "raid1", 00:21:07.922 "superblock": true, 00:21:07.922 "num_base_bdevs": 2, 00:21:07.922 "num_base_bdevs_discovered": 2, 00:21:07.922 "num_base_bdevs_operational": 
2, 00:21:07.922 "base_bdevs_list": [ 00:21:07.922 { 00:21:07.922 "name": "pt1", 00:21:07.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:07.922 "is_configured": true, 00:21:07.922 "data_offset": 256, 00:21:07.922 "data_size": 7936 00:21:07.922 }, 00:21:07.922 { 00:21:07.922 "name": "pt2", 00:21:07.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.922 "is_configured": true, 00:21:07.922 "data_offset": 256, 00:21:07.922 "data_size": 7936 00:21:07.922 } 00:21:07.922 ] 00:21:07.922 } 00:21:07.922 } 00:21:07.922 }' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:07.922 pt2' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.922 13:55:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.922 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.922 [2024-10-01 13:55:18.047938] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f4ccd431-aba7-4b59-aed3-a73415e8888c 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f4ccd431-aba7-4b59-aed3-a73415e8888c ']' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.922 [2024-10-01 13:55:18.091649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.922 [2024-10-01 13:55:18.091679] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.922 [2024-10-01 13:55:18.091770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.922 [2024-10-01 13:55:18.091832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.922 [2024-10-01 13:55:18.091864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.922 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.922 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.180 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.180 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.180 [2024-10-01 13:55:18.227613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:08.180 [2024-10-01 13:55:18.229908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:21:08.180 [2024-10-01 13:55:18.229999] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:08.180 [2024-10-01 13:55:18.230064] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:08.180 [2024-10-01 13:55:18.230085] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.180 [2024-10-01 13:55:18.230098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:08.180 request: 00:21:08.180 { 00:21:08.180 "name": "raid_bdev1", 00:21:08.180 "raid_level": "raid1", 00:21:08.180 "base_bdevs": [ 00:21:08.180 "malloc1", 00:21:08.180 "malloc2" 00:21:08.180 ], 00:21:08.180 "superblock": false, 00:21:08.180 "method": "bdev_raid_create", 00:21:08.180 "req_id": 1 00:21:08.180 } 00:21:08.180 Got JSON-RPC error response 00:21:08.181 response: 00:21:08.181 { 00:21:08.181 "code": -17, 00:21:08.181 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:08.181 } 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.181 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.181 [2024-10-01 13:55:18.299439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.181 [2024-10-01 13:55:18.299554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.181 [2024-10-01 13:55:18.299575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:08.181 [2024-10-01 13:55:18.299590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.181 [2024-10-01 13:55:18.301904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.181 [2024-10-01 13:55:18.301949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.181 [2024-10-01 13:55:18.302028] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.181 [2024-10-01 13:55:18.302103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.181 pt1 00:21:08.181 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.181 
13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.181 "name": "raid_bdev1", 00:21:08.181 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:08.181 "strip_size_kb": 0, 00:21:08.181 "state": "configuring", 00:21:08.181 "raid_level": "raid1", 00:21:08.181 "superblock": true, 00:21:08.181 "num_base_bdevs": 2, 00:21:08.181 "num_base_bdevs_discovered": 1, 00:21:08.181 "num_base_bdevs_operational": 2, 00:21:08.181 "base_bdevs_list": [ 00:21:08.181 { 00:21:08.181 "name": "pt1", 00:21:08.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.181 "is_configured": true, 00:21:08.181 "data_offset": 256, 00:21:08.181 "data_size": 7936 00:21:08.181 }, 00:21:08.181 { 00:21:08.181 "name": null, 00:21:08.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.181 "is_configured": false, 00:21:08.181 "data_offset": 256, 00:21:08.181 "data_size": 7936 00:21:08.181 } 00:21:08.181 ] 00:21:08.181 }' 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.181 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.748 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:08.748 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.749 [2024-10-01 13:55:18.750760] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.749 [2024-10-01 13:55:18.750844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.749 [2024-10-01 13:55:18.750866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:08.749 [2024-10-01 13:55:18.750880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.749 [2024-10-01 13:55:18.751059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.749 [2024-10-01 13:55:18.751079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.749 [2024-10-01 13:55:18.751131] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:08.749 [2024-10-01 13:55:18.751158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.749 [2024-10-01 13:55:18.751241] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:08.749 [2024-10-01 13:55:18.751254] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:08.749 [2024-10-01 13:55:18.751333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:08.749 [2024-10-01 13:55:18.751417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:08.749 [2024-10-01 13:55:18.751429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:08.749 [2024-10-01 13:55:18.751530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.749 pt2 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:08.749 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.749 13:55:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.749 "name": "raid_bdev1", 00:21:08.749 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:08.749 "strip_size_kb": 0, 00:21:08.749 "state": "online", 00:21:08.749 "raid_level": "raid1", 00:21:08.749 "superblock": true, 00:21:08.749 "num_base_bdevs": 2, 00:21:08.749 "num_base_bdevs_discovered": 2, 00:21:08.749 "num_base_bdevs_operational": 2, 00:21:08.749 "base_bdevs_list": [ 00:21:08.749 { 00:21:08.749 "name": "pt1", 00:21:08.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.749 "is_configured": true, 00:21:08.749 "data_offset": 256, 00:21:08.749 "data_size": 7936 00:21:08.749 }, 00:21:08.749 { 00:21:08.749 "name": "pt2", 00:21:08.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.749 "is_configured": true, 00:21:08.749 "data_offset": 256, 00:21:08.749 "data_size": 7936 00:21:08.749 } 00:21:08.749 ] 00:21:08.749 }' 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.749 13:55:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.007 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.007 [2024-10-01 13:55:19.186489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.265 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.265 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:09.265 "name": "raid_bdev1", 00:21:09.265 "aliases": [ 00:21:09.265 "f4ccd431-aba7-4b59-aed3-a73415e8888c" 00:21:09.265 ], 00:21:09.265 "product_name": "Raid Volume", 00:21:09.265 "block_size": 4128, 00:21:09.265 "num_blocks": 7936, 00:21:09.265 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:09.265 "md_size": 32, 00:21:09.265 "md_interleave": true, 00:21:09.265 "dif_type": 0, 00:21:09.265 "assigned_rate_limits": { 00:21:09.265 "rw_ios_per_sec": 0, 00:21:09.265 "rw_mbytes_per_sec": 0, 00:21:09.265 "r_mbytes_per_sec": 0, 00:21:09.265 "w_mbytes_per_sec": 0 00:21:09.265 }, 00:21:09.265 "claimed": false, 00:21:09.265 "zoned": false, 00:21:09.265 "supported_io_types": { 00:21:09.265 "read": true, 00:21:09.265 "write": true, 00:21:09.265 "unmap": false, 00:21:09.265 "flush": false, 00:21:09.265 "reset": true, 00:21:09.265 "nvme_admin": false, 00:21:09.265 "nvme_io": false, 00:21:09.265 "nvme_io_md": false, 00:21:09.265 "write_zeroes": true, 00:21:09.265 "zcopy": false, 00:21:09.265 "get_zone_info": false, 00:21:09.265 "zone_management": false, 00:21:09.265 "zone_append": false, 00:21:09.265 "compare": false, 00:21:09.265 "compare_and_write": false, 00:21:09.265 "abort": false, 00:21:09.265 "seek_hole": false, 
00:21:09.265 "seek_data": false, 00:21:09.265 "copy": false, 00:21:09.265 "nvme_iov_md": false 00:21:09.265 }, 00:21:09.265 "memory_domains": [ 00:21:09.265 { 00:21:09.266 "dma_device_id": "system", 00:21:09.266 "dma_device_type": 1 00:21:09.266 }, 00:21:09.266 { 00:21:09.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.266 "dma_device_type": 2 00:21:09.266 }, 00:21:09.266 { 00:21:09.266 "dma_device_id": "system", 00:21:09.266 "dma_device_type": 1 00:21:09.266 }, 00:21:09.266 { 00:21:09.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.266 "dma_device_type": 2 00:21:09.266 } 00:21:09.266 ], 00:21:09.266 "driver_specific": { 00:21:09.266 "raid": { 00:21:09.266 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:09.266 "strip_size_kb": 0, 00:21:09.266 "state": "online", 00:21:09.266 "raid_level": "raid1", 00:21:09.266 "superblock": true, 00:21:09.266 "num_base_bdevs": 2, 00:21:09.266 "num_base_bdevs_discovered": 2, 00:21:09.266 "num_base_bdevs_operational": 2, 00:21:09.266 "base_bdevs_list": [ 00:21:09.266 { 00:21:09.266 "name": "pt1", 00:21:09.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.266 "is_configured": true, 00:21:09.266 "data_offset": 256, 00:21:09.266 "data_size": 7936 00:21:09.266 }, 00:21:09.266 { 00:21:09.266 "name": "pt2", 00:21:09.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.266 "is_configured": true, 00:21:09.266 "data_offset": 256, 00:21:09.266 "data_size": 7936 00:21:09.266 } 00:21:09.266 ] 00:21:09.266 } 00:21:09.266 } 00:21:09.266 }' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:09.266 pt2' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.266 
13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.266 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.266 [2024-10-01 13:55:19.430166] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f4ccd431-aba7-4b59-aed3-a73415e8888c '!=' f4ccd431-aba7-4b59-aed3-a73415e8888c ']' 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.523 [2024-10-01 13:55:19.473911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:09.523 13:55:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.523 13:55:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.523 "name": "raid_bdev1", 00:21:09.523 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:09.523 "strip_size_kb": 0, 00:21:09.523 "state": "online", 00:21:09.523 "raid_level": "raid1", 00:21:09.523 "superblock": true, 00:21:09.523 "num_base_bdevs": 2, 00:21:09.523 "num_base_bdevs_discovered": 1, 00:21:09.523 "num_base_bdevs_operational": 1, 00:21:09.523 "base_bdevs_list": [ 00:21:09.523 { 00:21:09.523 "name": null, 00:21:09.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.523 "is_configured": false, 00:21:09.523 "data_offset": 0, 00:21:09.523 "data_size": 7936 00:21:09.523 }, 00:21:09.523 { 00:21:09.523 "name": "pt2", 00:21:09.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.523 "is_configured": true, 00:21:09.523 "data_offset": 256, 00:21:09.523 "data_size": 7936 00:21:09.523 } 00:21:09.523 ] 00:21:09.523 }' 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.523 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.781 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:09.781 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.781 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.781 [2024-10-01 13:55:19.969159] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:09.781 [2024-10-01 13:55:19.969191] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:09.781 [2024-10-01 13:55:19.969285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.781 [2024-10-01 13:55:19.969339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.781 [2024-10-01 13:55:19.969354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:10.040 13:55:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:10.040 13:55:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 [2024-10-01 13:55:20.045218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.040 [2024-10-01 13:55:20.045508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.040 [2024-10-01 13:55:20.045541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:10.040 [2024-10-01 13:55:20.045566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.040 [2024-10-01 13:55:20.047946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.040 [2024-10-01 13:55:20.047983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.040 [2024-10-01 13:55:20.048047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:10.040 [2024-10-01 13:55:20.048101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.040 [2024-10-01 13:55:20.048171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:10.040 [2024-10-01 13:55:20.048186] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:10.040 [2024-10-01 13:55:20.048292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:10.040 [2024-10-01 13:55:20.048357] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:10.040 [2024-10-01 13:55:20.048366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:10.040 [2024-10-01 13:55:20.048483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.040 pt2 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.040 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.041 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.041 "name": "raid_bdev1", 00:21:10.041 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:10.041 "strip_size_kb": 0, 00:21:10.041 "state": "online", 00:21:10.041 "raid_level": "raid1", 00:21:10.041 "superblock": true, 00:21:10.041 "num_base_bdevs": 2, 00:21:10.041 "num_base_bdevs_discovered": 1, 00:21:10.041 "num_base_bdevs_operational": 1, 00:21:10.041 "base_bdevs_list": [ 00:21:10.041 { 00:21:10.041 "name": null, 00:21:10.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.041 "is_configured": false, 00:21:10.041 "data_offset": 256, 00:21:10.041 "data_size": 7936 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "name": "pt2", 00:21:10.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.041 "is_configured": true, 00:21:10.041 "data_offset": 256, 00:21:10.041 "data_size": 7936 00:21:10.041 } 00:21:10.041 ] 00:21:10.041 }' 00:21:10.041 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.041 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.299 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:10.299 13:55:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.299 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.299 [2024-10-01 13:55:20.488424] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:10.299 [2024-10-01 13:55:20.488459] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.299 [2024-10-01 13:55:20.488540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.299 [2024-10-01 13:55:20.488596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.299 [2024-10-01 13:55:20.488619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:10.558 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.559 [2024-10-01 13:55:20.548355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:10.559 [2024-10-01 13:55:20.548579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.559 [2024-10-01 13:55:20.548615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:10.559 [2024-10-01 13:55:20.548628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.559 [2024-10-01 13:55:20.551054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.559 [2024-10-01 13:55:20.551094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:10.559 [2024-10-01 13:55:20.551161] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:10.559 [2024-10-01 13:55:20.551211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:10.559 [2024-10-01 13:55:20.551313] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:10.559 [2024-10-01 13:55:20.551325] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:10.559 [2024-10-01 13:55:20.551349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:10.559 [2024-10-01 13:55:20.551454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.559 [2024-10-01 13:55:20.551548] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:10.559 [2024-10-01 13:55:20.551558] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:10.559 [2024-10-01 13:55:20.551628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:10.559 [2024-10-01 13:55:20.551689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:10.559 [2024-10-01 13:55:20.551702] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:10.559 [2024-10-01 13:55:20.551773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.559 pt1 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.559 13:55:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.559 "name": "raid_bdev1", 00:21:10.559 "uuid": "f4ccd431-aba7-4b59-aed3-a73415e8888c", 00:21:10.559 "strip_size_kb": 0, 00:21:10.559 "state": "online", 00:21:10.559 "raid_level": "raid1", 00:21:10.559 "superblock": true, 00:21:10.559 "num_base_bdevs": 2, 00:21:10.559 "num_base_bdevs_discovered": 1, 00:21:10.559 "num_base_bdevs_operational": 1, 00:21:10.559 "base_bdevs_list": [ 00:21:10.559 { 00:21:10.559 "name": null, 00:21:10.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.559 "is_configured": false, 00:21:10.559 "data_offset": 256, 00:21:10.559 "data_size": 7936 00:21:10.559 }, 00:21:10.559 { 00:21:10.559 "name": "pt2", 00:21:10.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.559 "is_configured": true, 00:21:10.559 "data_offset": 256, 00:21:10.559 "data_size": 7936 00:21:10.559 } 00:21:10.559 ] 00:21:10.559 }' 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.559 13:55:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.129 [2024-10-01 13:55:21.087908] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f4ccd431-aba7-4b59-aed3-a73415e8888c '!=' f4ccd431-aba7-4b59-aed3-a73415e8888c ']' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88799 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88799 ']' 00:21:11.129 13:55:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88799 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88799 00:21:11.129 killing process with pid 88799 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88799' 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88799 00:21:11.129 [2024-10-01 13:55:21.185218] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.129 [2024-10-01 13:55:21.185332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.129 13:55:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88799 00:21:11.129 [2024-10-01 13:55:21.185384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.129 [2024-10-01 13:55:21.185417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:11.388 [2024-10-01 13:55:21.412026] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.765 13:55:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:12.765 00:21:12.765 real 0m6.466s 00:21:12.765 user 0m9.585s 00:21:12.765 sys 0m1.351s 00:21:12.765 
13:55:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.765 ************************************ 00:21:12.765 END TEST raid_superblock_test_md_interleaved 00:21:12.765 ************************************ 00:21:12.765 13:55:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 13:55:22 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:12.765 13:55:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:12.765 13:55:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.765 13:55:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 ************************************ 00:21:12.765 START TEST raid_rebuild_test_sb_md_interleaved 00:21:12.765 ************************************ 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89122 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89122 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89122 ']' 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.765 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.766 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.766 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.766 13:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:13.023 Zero copy mechanism will not be used. 00:21:13.023 [2024-10-01 13:55:22.965023] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:21:13.023 [2024-10-01 13:55:22.965150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89122 ] 00:21:13.023 [2024-10-01 13:55:23.138362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.295 [2024-10-01 13:55:23.367127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.565 [2024-10-01 13:55:23.587987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.565 [2024-10-01 13:55:23.588267] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.825 BaseBdev1_malloc 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.825 13:55:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.825 [2024-10-01 13:55:23.892981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:13.825 [2024-10-01 13:55:23.893223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.825 [2024-10-01 13:55:23.893267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:13.825 [2024-10-01 13:55:23.893286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.825 [2024-10-01 13:55:23.895764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.825 [2024-10-01 13:55:23.895807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:13.825 BaseBdev1 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.825 BaseBdev2_malloc 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.825 [2024-10-01 13:55:23.962661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:13.825 [2024-10-01 13:55:23.962866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.825 [2024-10-01 13:55:23.962901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:13.825 [2024-10-01 13:55:23.962917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.825 [2024-10-01 13:55:23.965204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.825 [2024-10-01 13:55:23.965249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:13.825 BaseBdev2 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.825 13:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.117 spare_malloc 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.117 spare_delay 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.117 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.117 [2024-10-01 13:55:24.033694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.117 [2024-10-01 13:55:24.033765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.117 [2024-10-01 13:55:24.033793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:14.117 [2024-10-01 13:55:24.033808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.117 [2024-10-01 13:55:24.036096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.118 [2024-10-01 13:55:24.036143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.118 spare 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.118 [2024-10-01 13:55:24.045776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.118 [2024-10-01 13:55:24.048027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.118 [2024-10-01 
13:55:24.048363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:14.118 [2024-10-01 13:55:24.048388] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:14.118 [2024-10-01 13:55:24.048512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:14.118 [2024-10-01 13:55:24.048593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:14.118 [2024-10-01 13:55:24.048604] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:14.118 [2024-10-01 13:55:24.048689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.118 "name": "raid_bdev1", 00:21:14.118 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:14.118 "strip_size_kb": 0, 00:21:14.118 "state": "online", 00:21:14.118 "raid_level": "raid1", 00:21:14.118 "superblock": true, 00:21:14.118 "num_base_bdevs": 2, 00:21:14.118 "num_base_bdevs_discovered": 2, 00:21:14.118 "num_base_bdevs_operational": 2, 00:21:14.118 "base_bdevs_list": [ 00:21:14.118 { 00:21:14.118 "name": "BaseBdev1", 00:21:14.118 "uuid": "cbbabb8d-733d-5a37-b979-4aa253d3ca26", 00:21:14.118 "is_configured": true, 00:21:14.118 "data_offset": 256, 00:21:14.118 "data_size": 7936 00:21:14.118 }, 00:21:14.118 { 00:21:14.118 "name": "BaseBdev2", 00:21:14.118 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:14.118 "is_configured": true, 00:21:14.118 "data_offset": 256, 00:21:14.118 "data_size": 7936 00:21:14.118 } 00:21:14.118 ] 00:21:14.118 }' 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.118 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.377 13:55:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.377 [2024-10-01 13:55:24.493445] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.377 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:14.635 13:55:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.635 [2024-10-01 13:55:24.589024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.635 13:55:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.635 "name": "raid_bdev1", 00:21:14.635 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:14.635 "strip_size_kb": 0, 00:21:14.635 "state": "online", 00:21:14.635 "raid_level": "raid1", 00:21:14.635 "superblock": true, 00:21:14.635 "num_base_bdevs": 2, 00:21:14.635 "num_base_bdevs_discovered": 1, 00:21:14.635 "num_base_bdevs_operational": 1, 00:21:14.635 "base_bdevs_list": [ 00:21:14.635 { 00:21:14.635 "name": null, 00:21:14.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.635 "is_configured": false, 00:21:14.635 "data_offset": 0, 00:21:14.635 "data_size": 7936 00:21:14.635 }, 00:21:14.635 { 00:21:14.635 "name": "BaseBdev2", 00:21:14.635 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:14.635 "is_configured": true, 00:21:14.635 "data_offset": 256, 00:21:14.635 "data_size": 7936 00:21:14.635 } 00:21:14.635 ] 00:21:14.635 }' 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.635 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.894 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.894 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.894 13:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.894 [2024-10-01 13:55:24.996481] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.894 [2024-10-01 13:55:25.013848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:14.894 13:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.894 13:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:14.894 [2024-10-01 13:55:25.015982] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.269 "name": "raid_bdev1", 00:21:16.269 
"uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:16.269 "strip_size_kb": 0, 00:21:16.269 "state": "online", 00:21:16.269 "raid_level": "raid1", 00:21:16.269 "superblock": true, 00:21:16.269 "num_base_bdevs": 2, 00:21:16.269 "num_base_bdevs_discovered": 2, 00:21:16.269 "num_base_bdevs_operational": 2, 00:21:16.269 "process": { 00:21:16.269 "type": "rebuild", 00:21:16.269 "target": "spare", 00:21:16.269 "progress": { 00:21:16.269 "blocks": 2560, 00:21:16.269 "percent": 32 00:21:16.269 } 00:21:16.269 }, 00:21:16.269 "base_bdevs_list": [ 00:21:16.269 { 00:21:16.269 "name": "spare", 00:21:16.269 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:16.269 "is_configured": true, 00:21:16.269 "data_offset": 256, 00:21:16.269 "data_size": 7936 00:21:16.269 }, 00:21:16.269 { 00:21:16.269 "name": "BaseBdev2", 00:21:16.269 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:16.269 "is_configured": true, 00:21:16.269 "data_offset": 256, 00:21:16.269 "data_size": 7936 00:21:16.269 } 00:21:16.269 ] 00:21:16.269 }' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.269 [2024-10-01 13:55:26.151996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:16.269 [2024-10-01 13:55:26.222114] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:16.269 [2024-10-01 13:55:26.222458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.269 [2024-10-01 13:55:26.222563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:16.269 [2024-10-01 13:55:26.222608] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.269 "name": "raid_bdev1", 00:21:16.269 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:16.269 "strip_size_kb": 0, 00:21:16.269 "state": "online", 00:21:16.269 "raid_level": "raid1", 00:21:16.269 "superblock": true, 00:21:16.269 "num_base_bdevs": 2, 00:21:16.269 "num_base_bdevs_discovered": 1, 00:21:16.269 "num_base_bdevs_operational": 1, 00:21:16.269 "base_bdevs_list": [ 00:21:16.269 { 00:21:16.269 "name": null, 00:21:16.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.269 "is_configured": false, 00:21:16.269 "data_offset": 0, 00:21:16.269 "data_size": 7936 00:21:16.269 }, 00:21:16.269 { 00:21:16.269 "name": "BaseBdev2", 00:21:16.269 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:16.269 "is_configured": true, 00:21:16.269 "data_offset": 256, 00:21:16.269 "data_size": 7936 00:21:16.269 } 00:21:16.269 ] 00:21:16.269 }' 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.269 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.528 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:16.528 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:16.528 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:16.528 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.529 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.788 "name": "raid_bdev1", 00:21:16.788 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:16.788 "strip_size_kb": 0, 00:21:16.788 "state": "online", 00:21:16.788 "raid_level": "raid1", 00:21:16.788 "superblock": true, 00:21:16.788 "num_base_bdevs": 2, 00:21:16.788 "num_base_bdevs_discovered": 1, 00:21:16.788 "num_base_bdevs_operational": 1, 00:21:16.788 "base_bdevs_list": [ 00:21:16.788 { 00:21:16.788 "name": null, 00:21:16.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.788 "is_configured": false, 00:21:16.788 "data_offset": 0, 00:21:16.788 "data_size": 7936 00:21:16.788 }, 00:21:16.788 { 00:21:16.788 "name": "BaseBdev2", 00:21:16.788 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:16.788 "is_configured": true, 00:21:16.788 "data_offset": 256, 00:21:16.788 "data_size": 7936 00:21:16.788 } 00:21:16.788 ] 00:21:16.788 }' 
00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.788 [2024-10-01 13:55:26.792541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.788 [2024-10-01 13:55:26.809143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.788 13:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:16.788 [2024-10-01 13:55:26.811470] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.722 "name": "raid_bdev1", 00:21:17.722 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:17.722 "strip_size_kb": 0, 00:21:17.722 "state": "online", 00:21:17.722 "raid_level": "raid1", 00:21:17.722 "superblock": true, 00:21:17.722 "num_base_bdevs": 2, 00:21:17.722 "num_base_bdevs_discovered": 2, 00:21:17.722 "num_base_bdevs_operational": 2, 00:21:17.722 "process": { 00:21:17.722 "type": "rebuild", 00:21:17.722 "target": "spare", 00:21:17.722 "progress": { 00:21:17.722 "blocks": 2560, 00:21:17.722 "percent": 32 00:21:17.722 } 00:21:17.722 }, 00:21:17.722 "base_bdevs_list": [ 00:21:17.722 { 00:21:17.722 "name": "spare", 00:21:17.722 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:17.722 "is_configured": true, 00:21:17.722 "data_offset": 256, 00:21:17.722 "data_size": 7936 00:21:17.722 }, 00:21:17.722 { 00:21:17.722 "name": "BaseBdev2", 00:21:17.722 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:17.722 "is_configured": true, 00:21:17.722 "data_offset": 256, 00:21:17.722 "data_size": 7936 00:21:17.722 } 00:21:17.722 ] 00:21:17.722 }' 00:21:17.722 13:55:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.722 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:17.980 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:17.980 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=762 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.981 13:55:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.981 "name": "raid_bdev1", 00:21:17.981 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:17.981 "strip_size_kb": 0, 00:21:17.981 "state": "online", 00:21:17.981 "raid_level": "raid1", 00:21:17.981 "superblock": true, 00:21:17.981 "num_base_bdevs": 2, 00:21:17.981 "num_base_bdevs_discovered": 2, 00:21:17.981 "num_base_bdevs_operational": 2, 00:21:17.981 "process": { 00:21:17.981 "type": "rebuild", 00:21:17.981 "target": "spare", 00:21:17.981 "progress": { 00:21:17.981 "blocks": 2816, 00:21:17.981 "percent": 35 00:21:17.981 } 00:21:17.981 }, 00:21:17.981 "base_bdevs_list": [ 00:21:17.981 { 00:21:17.981 "name": "spare", 00:21:17.981 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:17.981 "is_configured": true, 00:21:17.981 "data_offset": 256, 00:21:17.981 "data_size": 7936 00:21:17.981 }, 00:21:17.981 { 00:21:17.981 "name": "BaseBdev2", 00:21:17.981 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:17.981 "is_configured": true, 00:21:17.981 "data_offset": 256, 00:21:17.981 "data_size": 7936 00:21:17.981 } 00:21:17.981 ] 00:21:17.981 }' 00:21:17.981 13:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.981 13:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.981 13:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.981 13:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.981 13:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.918 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.176 13:55:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.176 "name": "raid_bdev1", 00:21:19.176 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:19.176 "strip_size_kb": 0, 00:21:19.176 "state": "online", 00:21:19.176 "raid_level": "raid1", 00:21:19.176 "superblock": true, 00:21:19.176 "num_base_bdevs": 2, 00:21:19.176 "num_base_bdevs_discovered": 2, 00:21:19.176 "num_base_bdevs_operational": 2, 00:21:19.176 "process": { 00:21:19.176 "type": "rebuild", 00:21:19.176 "target": "spare", 00:21:19.176 "progress": { 00:21:19.176 "blocks": 5632, 00:21:19.176 "percent": 70 00:21:19.176 } 00:21:19.176 }, 00:21:19.176 "base_bdevs_list": [ 00:21:19.176 { 00:21:19.176 "name": "spare", 00:21:19.176 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:19.176 "is_configured": true, 00:21:19.176 "data_offset": 256, 00:21:19.176 "data_size": 7936 00:21:19.176 }, 00:21:19.176 { 00:21:19.176 "name": "BaseBdev2", 00:21:19.176 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:19.176 "is_configured": true, 00:21:19.176 "data_offset": 256, 00:21:19.176 "data_size": 7936 00:21:19.176 } 00:21:19.176 ] 00:21:19.176 }' 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.176 13:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.742 [2024-10-01 13:55:29.930076] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:20.001 [2024-10-01 13:55:29.931147] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:20.001 [2024-10-01 13:55:29.931298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.261 "name": "raid_bdev1", 00:21:20.261 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:20.261 "strip_size_kb": 0, 00:21:20.261 "state": "online", 00:21:20.261 "raid_level": "raid1", 00:21:20.261 "superblock": true, 00:21:20.261 "num_base_bdevs": 2, 00:21:20.261 
"num_base_bdevs_discovered": 2, 00:21:20.261 "num_base_bdevs_operational": 2, 00:21:20.261 "base_bdevs_list": [ 00:21:20.261 { 00:21:20.261 "name": "spare", 00:21:20.261 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:20.261 "is_configured": true, 00:21:20.261 "data_offset": 256, 00:21:20.261 "data_size": 7936 00:21:20.261 }, 00:21:20.261 { 00:21:20.261 "name": "BaseBdev2", 00:21:20.261 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:20.261 "is_configured": true, 00:21:20.261 "data_offset": 256, 00:21:20.261 "data_size": 7936 00:21:20.261 } 00:21:20.261 ] 00:21:20.261 }' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.261 13:55:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.261 "name": "raid_bdev1", 00:21:20.261 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:20.261 "strip_size_kb": 0, 00:21:20.261 "state": "online", 00:21:20.261 "raid_level": "raid1", 00:21:20.261 "superblock": true, 00:21:20.261 "num_base_bdevs": 2, 00:21:20.261 "num_base_bdevs_discovered": 2, 00:21:20.261 "num_base_bdevs_operational": 2, 00:21:20.261 "base_bdevs_list": [ 00:21:20.261 { 00:21:20.261 "name": "spare", 00:21:20.261 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:20.261 "is_configured": true, 00:21:20.261 "data_offset": 256, 00:21:20.261 "data_size": 7936 00:21:20.261 }, 00:21:20.261 { 00:21:20.261 "name": "BaseBdev2", 00:21:20.261 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:20.261 "is_configured": true, 00:21:20.261 "data_offset": 256, 00:21:20.261 "data_size": 7936 00:21:20.261 } 00:21:20.261 ] 00:21:20.261 }' 00:21:20.261 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:20.569 13:55:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.569 "name": 
"raid_bdev1", 00:21:20.569 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:20.569 "strip_size_kb": 0, 00:21:20.569 "state": "online", 00:21:20.569 "raid_level": "raid1", 00:21:20.569 "superblock": true, 00:21:20.569 "num_base_bdevs": 2, 00:21:20.569 "num_base_bdevs_discovered": 2, 00:21:20.569 "num_base_bdevs_operational": 2, 00:21:20.569 "base_bdevs_list": [ 00:21:20.569 { 00:21:20.569 "name": "spare", 00:21:20.569 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:20.569 "is_configured": true, 00:21:20.569 "data_offset": 256, 00:21:20.569 "data_size": 7936 00:21:20.569 }, 00:21:20.569 { 00:21:20.569 "name": "BaseBdev2", 00:21:20.569 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:20.569 "is_configured": true, 00:21:20.569 "data_offset": 256, 00:21:20.569 "data_size": 7936 00:21:20.569 } 00:21:20.569 ] 00:21:20.569 }' 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.569 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 [2024-10-01 13:55:30.952287] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:20.828 [2024-10-01 13:55:30.952323] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:20.828 [2024-10-01 13:55:30.952431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.828 [2024-10-01 13:55:30.952506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.828 [2024-10-01 
13:55:30.952519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 13:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:20.828 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 13:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.087 [2024-10-01 13:55:31.024171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.087 [2024-10-01 13:55:31.024240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.087 [2024-10-01 13:55:31.024266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:21.087 [2024-10-01 13:55:31.024279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.087 [2024-10-01 13:55:31.026710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.087 [2024-10-01 13:55:31.026752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.087 [2024-10-01 13:55:31.026822] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:21.087 [2024-10-01 13:55:31.026889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.087 [2024-10-01 13:55:31.026998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.087 spare 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.087 [2024-10-01 13:55:31.126933] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:21.087 [2024-10-01 13:55:31.127011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:21.087 [2024-10-01 13:55:31.127152] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:21.087 [2024-10-01 13:55:31.127305] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:21.087 [2024-10-01 13:55:31.127314] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:21.087 [2024-10-01 13:55:31.127461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.087 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.088 13:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.088 "name": "raid_bdev1", 00:21:21.088 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:21.088 "strip_size_kb": 0, 00:21:21.088 "state": "online", 00:21:21.088 "raid_level": "raid1", 00:21:21.088 "superblock": true, 00:21:21.088 "num_base_bdevs": 2, 00:21:21.088 "num_base_bdevs_discovered": 2, 00:21:21.088 "num_base_bdevs_operational": 2, 00:21:21.088 "base_bdevs_list": [ 00:21:21.088 { 00:21:21.088 "name": "spare", 00:21:21.088 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:21.088 "is_configured": true, 00:21:21.088 "data_offset": 256, 00:21:21.088 "data_size": 7936 00:21:21.088 }, 00:21:21.088 { 00:21:21.088 "name": "BaseBdev2", 00:21:21.088 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:21.088 "is_configured": true, 00:21:21.088 "data_offset": 256, 00:21:21.088 "data_size": 7936 00:21:21.088 } 00:21:21.088 ] 00:21:21.088 }' 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.088 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.655 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.655 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.655 13:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.655 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.655 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.656 "name": "raid_bdev1", 00:21:21.656 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:21.656 "strip_size_kb": 0, 00:21:21.656 "state": "online", 00:21:21.656 "raid_level": "raid1", 00:21:21.656 "superblock": true, 00:21:21.656 "num_base_bdevs": 2, 00:21:21.656 "num_base_bdevs_discovered": 2, 00:21:21.656 "num_base_bdevs_operational": 2, 00:21:21.656 "base_bdevs_list": [ 00:21:21.656 { 00:21:21.656 "name": "spare", 00:21:21.656 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:21.656 "is_configured": true, 00:21:21.656 "data_offset": 256, 00:21:21.656 "data_size": 7936 00:21:21.656 }, 00:21:21.656 { 00:21:21.656 "name": "BaseBdev2", 00:21:21.656 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:21.656 "is_configured": true, 00:21:21.656 "data_offset": 256, 00:21:21.656 "data_size": 7936 00:21:21.656 } 00:21:21.656 ] 00:21:21.656 }' 00:21:21.656 13:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.656 [2024-10-01 13:55:31.719709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.656 13:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.656 "name": "raid_bdev1", 00:21:21.656 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:21.656 "strip_size_kb": 0, 00:21:21.656 "state": "online", 00:21:21.656 
"raid_level": "raid1", 00:21:21.656 "superblock": true, 00:21:21.656 "num_base_bdevs": 2, 00:21:21.656 "num_base_bdevs_discovered": 1, 00:21:21.656 "num_base_bdevs_operational": 1, 00:21:21.656 "base_bdevs_list": [ 00:21:21.656 { 00:21:21.656 "name": null, 00:21:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.656 "is_configured": false, 00:21:21.656 "data_offset": 0, 00:21:21.656 "data_size": 7936 00:21:21.656 }, 00:21:21.656 { 00:21:21.656 "name": "BaseBdev2", 00:21:21.656 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:21.656 "is_configured": true, 00:21:21.656 "data_offset": 256, 00:21:21.656 "data_size": 7936 00:21:21.656 } 00:21:21.656 ] 00:21:21.656 }' 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.656 13:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.223 13:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.223 13:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.223 13:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.223 [2024-10-01 13:55:32.167737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.223 [2024-10-01 13:55:32.168023] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:22.223 [2024-10-01 13:55:32.168049] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:22.223 [2024-10-01 13:55:32.168095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.223 [2024-10-01 13:55:32.184064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:22.223 13:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.223 13:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:22.223 [2024-10-01 13:55:32.186484] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:23.159 "name": "raid_bdev1", 00:21:23.159 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:23.159 "strip_size_kb": 0, 00:21:23.159 "state": "online", 00:21:23.159 "raid_level": "raid1", 00:21:23.159 "superblock": true, 00:21:23.159 "num_base_bdevs": 2, 00:21:23.159 "num_base_bdevs_discovered": 2, 00:21:23.159 "num_base_bdevs_operational": 2, 00:21:23.159 "process": { 00:21:23.159 "type": "rebuild", 00:21:23.159 "target": "spare", 00:21:23.159 "progress": { 00:21:23.159 "blocks": 2560, 00:21:23.159 "percent": 32 00:21:23.159 } 00:21:23.159 }, 00:21:23.159 "base_bdevs_list": [ 00:21:23.159 { 00:21:23.159 "name": "spare", 00:21:23.159 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:23.159 "is_configured": true, 00:21:23.159 "data_offset": 256, 00:21:23.159 "data_size": 7936 00:21:23.159 }, 00:21:23.159 { 00:21:23.159 "name": "BaseBdev2", 00:21:23.159 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:23.159 "is_configured": true, 00:21:23.159 "data_offset": 256, 00:21:23.159 "data_size": 7936 00:21:23.159 } 00:21:23.159 ] 00:21:23.159 }' 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.159 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:23.160 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.160 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.160 [2024-10-01 13:55:33.346237] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.420 [2024-10-01 13:55:33.392506] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.420 [2024-10-01 13:55:33.392596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.420 [2024-10-01 13:55:33.392614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.420 [2024-10-01 13:55:33.392641] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.420 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.421 13:55:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.421 "name": "raid_bdev1", 00:21:23.421 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:23.421 "strip_size_kb": 0, 00:21:23.421 "state": "online", 00:21:23.421 "raid_level": "raid1", 00:21:23.421 "superblock": true, 00:21:23.421 "num_base_bdevs": 2, 00:21:23.421 "num_base_bdevs_discovered": 1, 00:21:23.421 "num_base_bdevs_operational": 1, 00:21:23.421 "base_bdevs_list": [ 00:21:23.421 { 00:21:23.421 "name": null, 00:21:23.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.421 "is_configured": false, 00:21:23.421 "data_offset": 0, 00:21:23.421 "data_size": 7936 00:21:23.421 }, 00:21:23.421 { 00:21:23.421 "name": "BaseBdev2", 00:21:23.421 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:23.421 "is_configured": true, 00:21:23.421 "data_offset": 256, 00:21:23.421 "data_size": 7936 00:21:23.421 } 00:21:23.421 ] 00:21:23.421 }' 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.421 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.996 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:23.996 13:55:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.996 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.996 [2024-10-01 13:55:33.891966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:23.996 [2024-10-01 13:55:33.892177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.996 [2024-10-01 13:55:33.892214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:23.996 [2024-10-01 13:55:33.892229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.996 [2024-10-01 13:55:33.892605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.996 [2024-10-01 13:55:33.892630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:23.996 [2024-10-01 13:55:33.892699] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:23.996 [2024-10-01 13:55:33.892716] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:23.996 [2024-10-01 13:55:33.892735] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:23.996 [2024-10-01 13:55:33.892761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.996 [2024-10-01 13:55:33.909010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:23.996 spare 00:21:23.996 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.996 13:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:23.996 [2024-10-01 13:55:33.911269] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:24.931 "name": "raid_bdev1", 00:21:24.931 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:24.931 "strip_size_kb": 0, 00:21:24.931 "state": "online", 00:21:24.931 "raid_level": "raid1", 00:21:24.931 "superblock": true, 00:21:24.931 "num_base_bdevs": 2, 00:21:24.931 "num_base_bdevs_discovered": 2, 00:21:24.931 "num_base_bdevs_operational": 2, 00:21:24.931 "process": { 00:21:24.931 "type": "rebuild", 00:21:24.931 "target": "spare", 00:21:24.931 "progress": { 00:21:24.931 "blocks": 2560, 00:21:24.931 "percent": 32 00:21:24.931 } 00:21:24.931 }, 00:21:24.931 "base_bdevs_list": [ 00:21:24.931 { 00:21:24.931 "name": "spare", 00:21:24.931 "uuid": "47a5034a-4fcc-5f1e-99d4-ea8821c9a8d9", 00:21:24.931 "is_configured": true, 00:21:24.931 "data_offset": 256, 00:21:24.931 "data_size": 7936 00:21:24.931 }, 00:21:24.931 { 00:21:24.931 "name": "BaseBdev2", 00:21:24.931 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:24.931 "is_configured": true, 00:21:24.931 "data_offset": 256, 00:21:24.931 "data_size": 7936 00:21:24.931 } 00:21:24.931 ] 00:21:24.931 }' 00:21:24.931 13:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.931 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 [2024-10-01 
13:55:35.054706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.931 [2024-10-01 13:55:35.117958] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:24.931 [2024-10-01 13:55:35.118053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.931 [2024-10-01 13:55:35.118073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.931 [2024-10-01 13:55:35.118083] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.190 13:55:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.190 "name": "raid_bdev1", 00:21:25.190 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:25.190 "strip_size_kb": 0, 00:21:25.190 "state": "online", 00:21:25.190 "raid_level": "raid1", 00:21:25.190 "superblock": true, 00:21:25.190 "num_base_bdevs": 2, 00:21:25.190 "num_base_bdevs_discovered": 1, 00:21:25.190 "num_base_bdevs_operational": 1, 00:21:25.190 "base_bdevs_list": [ 00:21:25.190 { 00:21:25.190 "name": null, 00:21:25.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.190 "is_configured": false, 00:21:25.190 "data_offset": 0, 00:21:25.190 "data_size": 7936 00:21:25.190 }, 00:21:25.190 { 00:21:25.190 "name": "BaseBdev2", 00:21:25.190 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:25.190 "is_configured": true, 00:21:25.190 "data_offset": 256, 00:21:25.190 "data_size": 7936 00:21:25.190 } 00:21:25.190 ] 00:21:25.190 }' 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.190 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.448 13:55:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.448 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.706 "name": "raid_bdev1", 00:21:25.706 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:25.706 "strip_size_kb": 0, 00:21:25.706 "state": "online", 00:21:25.706 "raid_level": "raid1", 00:21:25.706 "superblock": true, 00:21:25.706 "num_base_bdevs": 2, 00:21:25.706 "num_base_bdevs_discovered": 1, 00:21:25.706 "num_base_bdevs_operational": 1, 00:21:25.706 "base_bdevs_list": [ 00:21:25.706 { 00:21:25.706 "name": null, 00:21:25.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.706 "is_configured": false, 00:21:25.706 "data_offset": 0, 00:21:25.706 "data_size": 7936 00:21:25.706 }, 00:21:25.706 { 00:21:25.706 "name": "BaseBdev2", 00:21:25.706 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:25.706 "is_configured": true, 00:21:25.706 "data_offset": 256, 
00:21:25.706 "data_size": 7936 00:21:25.706 } 00:21:25.706 ] 00:21:25.706 }' 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.706 [2024-10-01 13:55:35.764031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:25.706 [2024-10-01 13:55:35.764106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.706 [2024-10-01 13:55:35.764141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:25.706 [2024-10-01 13:55:35.764157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.706 [2024-10-01 13:55:35.764362] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.706 [2024-10-01 13:55:35.764379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:25.706 [2024-10-01 13:55:35.764465] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:25.706 [2024-10-01 13:55:35.764481] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:25.706 [2024-10-01 13:55:35.764494] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:25.706 [2024-10-01 13:55:35.764506] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:25.706 BaseBdev1 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.706 13:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.638 13:55:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.638 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.638 "name": "raid_bdev1", 00:21:26.638 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:26.638 "strip_size_kb": 0, 00:21:26.638 "state": "online", 00:21:26.638 "raid_level": "raid1", 00:21:26.638 "superblock": true, 00:21:26.638 "num_base_bdevs": 2, 00:21:26.638 "num_base_bdevs_discovered": 1, 00:21:26.638 "num_base_bdevs_operational": 1, 00:21:26.638 "base_bdevs_list": [ 00:21:26.638 { 00:21:26.638 "name": null, 00:21:26.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.638 "is_configured": false, 00:21:26.638 "data_offset": 0, 00:21:26.638 "data_size": 7936 00:21:26.638 }, 00:21:26.638 { 00:21:26.638 "name": "BaseBdev2", 00:21:26.638 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:26.638 "is_configured": true, 00:21:26.638 "data_offset": 256, 00:21:26.638 "data_size": 7936 00:21:26.638 } 00:21:26.638 ] 00:21:26.638 }' 00:21:26.638 13:55:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.639 13:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.205 "name": "raid_bdev1", 00:21:27.205 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:27.205 "strip_size_kb": 0, 00:21:27.205 "state": "online", 00:21:27.205 "raid_level": "raid1", 00:21:27.205 "superblock": true, 00:21:27.205 "num_base_bdevs": 2, 00:21:27.205 "num_base_bdevs_discovered": 1, 00:21:27.205 "num_base_bdevs_operational": 1, 00:21:27.205 "base_bdevs_list": [ 00:21:27.205 { 00:21:27.205 "name": 
null, 00:21:27.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.205 "is_configured": false, 00:21:27.205 "data_offset": 0, 00:21:27.205 "data_size": 7936 00:21:27.205 }, 00:21:27.205 { 00:21:27.205 "name": "BaseBdev2", 00:21:27.205 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:27.205 "is_configured": true, 00:21:27.205 "data_offset": 256, 00:21:27.205 "data_size": 7936 00:21:27.205 } 00:21:27.205 ] 00:21:27.205 }' 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.205 [2024-10-01 13:55:37.339692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.205 [2024-10-01 13:55:37.339856] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:27.205 [2024-10-01 13:55:37.339885] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:27.205 request: 00:21:27.205 { 00:21:27.205 "base_bdev": "BaseBdev1", 00:21:27.205 "raid_bdev": "raid_bdev1", 00:21:27.205 "method": "bdev_raid_add_base_bdev", 00:21:27.205 "req_id": 1 00:21:27.205 } 00:21:27.205 Got JSON-RPC error response 00:21:27.205 response: 00:21:27.205 { 00:21:27.205 "code": -22, 00:21:27.205 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:27.205 } 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:27.205 13:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.579 "name": "raid_bdev1", 00:21:28.579 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:28.579 "strip_size_kb": 0, 
00:21:28.579 "state": "online", 00:21:28.579 "raid_level": "raid1", 00:21:28.579 "superblock": true, 00:21:28.579 "num_base_bdevs": 2, 00:21:28.579 "num_base_bdevs_discovered": 1, 00:21:28.579 "num_base_bdevs_operational": 1, 00:21:28.579 "base_bdevs_list": [ 00:21:28.579 { 00:21:28.579 "name": null, 00:21:28.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.579 "is_configured": false, 00:21:28.579 "data_offset": 0, 00:21:28.579 "data_size": 7936 00:21:28.579 }, 00:21:28.579 { 00:21:28.579 "name": "BaseBdev2", 00:21:28.579 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:28.579 "is_configured": true, 00:21:28.579 "data_offset": 256, 00:21:28.579 "data_size": 7936 00:21:28.579 } 00:21:28.579 ] 00:21:28.579 }' 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.579 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.837 
13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.837 "name": "raid_bdev1", 00:21:28.837 "uuid": "a542709f-c8b9-4c33-ac8c-89560ab93311", 00:21:28.837 "strip_size_kb": 0, 00:21:28.837 "state": "online", 00:21:28.837 "raid_level": "raid1", 00:21:28.837 "superblock": true, 00:21:28.837 "num_base_bdevs": 2, 00:21:28.837 "num_base_bdevs_discovered": 1, 00:21:28.837 "num_base_bdevs_operational": 1, 00:21:28.837 "base_bdevs_list": [ 00:21:28.837 { 00:21:28.837 "name": null, 00:21:28.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.837 "is_configured": false, 00:21:28.837 "data_offset": 0, 00:21:28.837 "data_size": 7936 00:21:28.837 }, 00:21:28.837 { 00:21:28.837 "name": "BaseBdev2", 00:21:28.837 "uuid": "bfc5b2b7-80cb-511d-a965-a3560f589237", 00:21:28.837 "is_configured": true, 00:21:28.837 "data_offset": 256, 00:21:28.837 "data_size": 7936 00:21:28.837 } 00:21:28.837 ] 00:21:28.837 }' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89122 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89122 ']' 00:21:28.837 13:55:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89122 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89122 00:21:28.837 killing process with pid 89122 00:21:28.837 Received shutdown signal, test time was about 60.000000 seconds 00:21:28.837 00:21:28.837 Latency(us) 00:21:28.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.837 =================================================================================================================== 00:21:28.837 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89122' 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89122 00:21:28.837 [2024-10-01 13:55:38.968495] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.837 13:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89122 00:21:28.837 [2024-10-01 13:55:38.968676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.837 [2024-10-01 13:55:38.968725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.838 [2024-10-01 13:55:38.968743] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:29.095 [2024-10-01 13:55:39.274523] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.472 13:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:30.472 00:21:30.472 real 0m17.703s 00:21:30.472 user 0m23.053s 00:21:30.472 sys 0m1.885s 00:21:30.472 13:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.472 ************************************ 00:21:30.472 13:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.472 END TEST raid_rebuild_test_sb_md_interleaved 00:21:30.472 ************************************ 00:21:30.472 13:55:40 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:30.472 13:55:40 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:30.472 13:55:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89122 ']' 00:21:30.472 13:55:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89122 00:21:30.472 13:55:40 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:30.472 ************************************ 00:21:30.472 END TEST bdev_raid 00:21:30.472 ************************************ 00:21:30.472 00:21:30.472 real 12m25.299s 00:21:30.472 user 16m31.886s 00:21:30.472 sys 2m11.569s 00:21:30.472 13:55:40 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.472 13:55:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.730 13:55:40 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:30.730 13:55:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:30.730 13:55:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.730 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:21:30.730 ************************************ 00:21:30.730 START TEST spdkcli_raid 00:21:30.730 
************************************ 00:21:30.730 13:55:40 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:30.730 * Looking for test storage... 00:21:30.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:30.730 13:55:40 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:30.730 13:55:40 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:21:30.730 13:55:40 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:30.989 13:55:40 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.990 13:55:40 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:30.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.990 --rc genhtml_branch_coverage=1 00:21:30.990 --rc genhtml_function_coverage=1 00:21:30.990 --rc genhtml_legend=1 00:21:30.990 --rc geninfo_all_blocks=1 00:21:30.990 --rc geninfo_unexecuted_blocks=1 00:21:30.990 00:21:30.990 ' 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:30.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.990 --rc genhtml_branch_coverage=1 00:21:30.990 --rc genhtml_function_coverage=1 00:21:30.990 --rc genhtml_legend=1 00:21:30.990 --rc geninfo_all_blocks=1 00:21:30.990 --rc geninfo_unexecuted_blocks=1 00:21:30.990 00:21:30.990 ' 00:21:30.990 
13:55:40 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:30.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.990 --rc genhtml_branch_coverage=1 00:21:30.990 --rc genhtml_function_coverage=1 00:21:30.990 --rc genhtml_legend=1 00:21:30.990 --rc geninfo_all_blocks=1 00:21:30.990 --rc geninfo_unexecuted_blocks=1 00:21:30.990 00:21:30.990 ' 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:30.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.990 --rc genhtml_branch_coverage=1 00:21:30.990 --rc genhtml_function_coverage=1 00:21:30.990 --rc genhtml_legend=1 00:21:30.990 --rc geninfo_all_blocks=1 00:21:30.990 --rc geninfo_unexecuted_blocks=1 00:21:30.990 00:21:30.990 ' 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:30.990 13:55:40 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89806 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89806 00:21:30.990 13:55:40 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89806 ']' 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.990 13:55:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.990 [2024-10-01 13:55:41.083803] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:21:30.990 [2024-10-01 13:55:41.084137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89806 ] 00:21:31.249 [2024-10-01 13:55:41.256979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:31.509 [2024-10-01 13:55:41.483410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.509 [2024-10-01 13:55:41.483468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:21:32.443 13:55:42 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.443 13:55:42 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.443 13:55:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.443 13:55:42 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:32.443 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:32.443 ' 00:21:33.820 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:33.820 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:34.079 13:55:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:34.079 13:55:44 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.079 13:55:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.079 13:55:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:34.079 13:55:44 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.079 13:55:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.079 13:55:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:34.079 ' 00:21:35.018 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:35.277 13:55:45 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:35.277 13:55:45 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.277 13:55:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.277 13:55:45 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:35.277 13:55:45 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.277 13:55:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.277 13:55:45 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:35.277 13:55:45 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:35.845 13:55:45 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:35.845 13:55:45 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:35.845 13:55:45 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:35.845 13:55:45 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.845 13:55:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.845 13:55:45 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:35.845 13:55:45 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.845 13:55:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.845 13:55:45 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:35.845 ' 00:21:36.780 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:37.039 13:55:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:37.039 13:55:47 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.039 13:55:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.039 13:55:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:37.039 13:55:47 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:37.039 13:55:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.039 13:55:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:37.039 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:37.039 ' 00:21:38.416 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:38.416 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:38.675 13:55:48 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.675 13:55:48 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89806 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89806 ']' 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89806 00:21:38.675 13:55:48 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89806 00:21:38.675 killing process with pid 89806 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89806' 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89806 00:21:38.675 13:55:48 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89806 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89806 ']' 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89806 00:21:41.212 13:55:51 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89806 ']' 00:21:41.212 13:55:51 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89806 00:21:41.212 Process with pid 89806 is not found 00:21:41.212 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89806) - No such process 00:21:41.212 13:55:51 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89806 is not found' 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:41.212 13:55:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:41.212 00:21:41.212 real 0m10.654s 00:21:41.212 user 0m21.614s 00:21:41.212 sys 
0m1.188s 00:21:41.212 13:55:51 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:41.212 13:55:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.212 ************************************ 00:21:41.212 END TEST spdkcli_raid 00:21:41.212 ************************************ 00:21:41.470 13:55:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:41.470 13:55:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:41.470 13:55:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:41.470 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:21:41.470 ************************************ 00:21:41.470 START TEST blockdev_raid5f 00:21:41.470 ************************************ 00:21:41.470 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:41.470 * Looking for test storage... 00:21:41.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:41.470 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:41.470 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:21:41.470 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:41.470 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.470 13:55:51 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.731 13:55:51 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:41.731 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:41.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.731 --rc genhtml_branch_coverage=1 00:21:41.731 --rc genhtml_function_coverage=1 00:21:41.731 --rc genhtml_legend=1 00:21:41.731 --rc geninfo_all_blocks=1 00:21:41.731 --rc geninfo_unexecuted_blocks=1 00:21:41.731 00:21:41.731 ' 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90086 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:41.731 13:55:51 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90086 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90086 ']' 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.731 13:55:51 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.732 13:55:51 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.732 13:55:51 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.732 13:55:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:41.732 [2024-10-01 13:55:51.802877] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:21:41.732 [2024-10-01 13:55:51.803774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90086 ] 00:21:41.991 [2024-10-01 13:55:51.975419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.251 [2024-10-01 13:55:52.198075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.189 13:55:53 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.189 13:55:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:43.190 13:55:53 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.190 Malloc0 00:21:43.190 Malloc1 00:21:43.190 Malloc2 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:21:43.190 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.190 13:55:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:43.449 13:55:53 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.449 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:43.449 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6104fe6f-12db-4acc-a5d6-e4995ab5618d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6104fe6f-12db-4acc-a5d6-e4995ab5618d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6104fe6f-12db-4acc-a5d6-e4995ab5618d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "951074a7-d486-48e7-a4cf-3bee01d307df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "488c2ad9-27ec-492a-b8b6-4a3c4fe063f5",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "12f8d571-d31e-4327-b810-f7200a137f77",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:43.449 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:43.450 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:43.450 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:21:43.450 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:43.450 13:55:53 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90086 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90086 ']' 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90086 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90086 00:21:43.450 killing process with pid 90086 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90086' 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90086 00:21:43.450 13:55:53 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90086 00:21:46.766 13:55:56 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:46.766 13:55:56 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:46.767 13:55:56 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:46.767 13:55:56 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.767 13:55:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:46.767 ************************************ 00:21:46.767 START TEST bdev_hello_world 00:21:46.767 ************************************ 00:21:46.767 13:55:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:46.767 [2024-10-01 13:55:56.523930] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:21:46.767 [2024-10-01 13:55:56.524059] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90159 ] 00:21:46.767 [2024-10-01 13:55:56.695703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.767 [2024-10-01 13:55:56.912487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.335 [2024-10-01 13:55:57.456002] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:47.335 [2024-10-01 13:55:57.456258] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:47.335 [2024-10-01 13:55:57.456289] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:47.335 [2024-10-01 13:55:57.456822] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:47.335 [2024-10-01 13:55:57.457012] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:47.335 [2024-10-01 13:55:57.457039] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:47.335 [2024-10-01 13:55:57.457095] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:21:47.335 00:21:47.335 [2024-10-01 13:55:57.457118] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:49.237 00:21:49.237 real 0m2.670s 00:21:49.237 user 0m2.259s 00:21:49.237 sys 0m0.286s 00:21:49.237 13:55:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:49.237 13:55:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:49.237 ************************************ 00:21:49.237 END TEST bdev_hello_world 00:21:49.237 ************************************ 00:21:49.237 13:55:59 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:49.237 13:55:59 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:49.237 13:55:59 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:49.237 13:55:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:49.237 ************************************ 00:21:49.237 START TEST bdev_bounds 00:21:49.237 ************************************ 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90201 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:49.237 Process bdevio pid: 90201 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90201' 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90201 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90201 ']' 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:49.237 13:55:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:49.237 [2024-10-01 13:55:59.295926] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:21:49.237 [2024-10-01 13:55:59.296953] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90201 ] 00:21:49.496 [2024-10-01 13:55:59.491897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:49.754 [2024-10-01 13:55:59.715625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.754 [2024-10-01 13:55:59.715673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.754 [2024-10-01 13:55:59.715717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.321 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.321 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:21:50.321 13:56:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:50.321 I/O targets: 00:21:50.321 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:50.321 00:21:50.321 
00:21:50.321 CUnit - A unit testing framework for C - Version 2.1-3 00:21:50.321 http://cunit.sourceforge.net/ 00:21:50.321 00:21:50.321 00:21:50.321 Suite: bdevio tests on: raid5f 00:21:50.321 Test: blockdev write read block ...passed 00:21:50.321 Test: blockdev write zeroes read block ...passed 00:21:50.321 Test: blockdev write zeroes read no split ...passed 00:21:50.579 Test: blockdev write zeroes read split ...passed 00:21:50.580 Test: blockdev write zeroes read split partial ...passed 00:21:50.580 Test: blockdev reset ...passed 00:21:50.580 Test: blockdev write read 8 blocks ...passed 00:21:50.580 Test: blockdev write read size > 128k ...passed 00:21:50.580 Test: blockdev write read invalid size ...passed 00:21:50.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:50.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:50.580 Test: blockdev write read max offset ...passed 00:21:50.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:50.580 Test: blockdev writev readv 8 blocks ...passed 00:21:50.580 Test: blockdev writev readv 30 x 1block ...passed 00:21:50.580 Test: blockdev writev readv block ...passed 00:21:50.580 Test: blockdev writev readv size > 128k ...passed 00:21:50.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:50.580 Test: blockdev comparev and writev ...passed 00:21:50.580 Test: blockdev nvme passthru rw ...passed 00:21:50.580 Test: blockdev nvme passthru vendor specific ...passed 00:21:50.580 Test: blockdev nvme admin passthru ...passed 00:21:50.580 Test: blockdev copy ...passed 00:21:50.580 00:21:50.580 Run Summary: Type Total Ran Passed Failed Inactive 00:21:50.580 suites 1 1 n/a 0 0 00:21:50.580 tests 23 23 23 0 0 00:21:50.580 asserts 130 130 130 0 n/a 00:21:50.580 00:21:50.580 Elapsed time = 0.637 seconds 00:21:50.580 0 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90201 00:21:50.580 
13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90201 ']' 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90201 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90201 00:21:50.580 killing process with pid 90201 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90201' 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90201 00:21:50.580 13:56:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90201 00:21:52.484 ************************************ 00:21:52.484 END TEST bdev_bounds 00:21:52.484 ************************************ 00:21:52.484 13:56:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:52.484 00:21:52.484 real 0m3.189s 00:21:52.484 user 0m7.501s 00:21:52.484 sys 0m0.472s 00:21:52.484 13:56:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.484 13:56:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:52.484 13:56:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:52.484 13:56:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:52.484 13:56:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.484 
13:56:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:52.484 ************************************ 00:21:52.484 START TEST bdev_nbd 00:21:52.484 ************************************ 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90272 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90272 /var/tmp/spdk-nbd.sock 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90272 ']' 00:21:52.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.484 13:56:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:52.484 [2024-10-01 13:56:02.559112] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:21:52.484 [2024-10-01 13:56:02.559256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.743 [2024-10-01 13:56:02.735475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.002 [2024-10-01 13:56:02.966828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:53.568 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:53.569 13:56:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:53.827 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:53.828 1+0 records in 00:21:53.828 1+0 records out 00:21:53.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556815 s, 7.4 MB/s 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:53.828 13:56:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:54.086 { 00:21:54.086 "nbd_device": "/dev/nbd0", 00:21:54.086 "bdev_name": "raid5f" 00:21:54.086 } 00:21:54.086 ]' 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:54.086 { 00:21:54.086 "nbd_device": "/dev/nbd0", 00:21:54.086 "bdev_name": "raid5f" 00:21:54.086 } 00:21:54.086 ]' 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:54.086 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.345 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.603 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:54.862 /dev/nbd0 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:54.862 13:56:04 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.862 1+0 records in 00:21:54.862 1+0 records out 00:21:54.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280282 s, 14.6 MB/s 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.862 13:56:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:55.121 { 00:21:55.121 "nbd_device": "/dev/nbd0", 00:21:55.121 "bdev_name": "raid5f" 00:21:55.121 } 00:21:55.121 ]' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:55.121 { 00:21:55.121 "nbd_device": "/dev/nbd0", 00:21:55.121 "bdev_name": "raid5f" 00:21:55.121 } 00:21:55.121 ]' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:55.121 256+0 records in 00:21:55.121 256+0 records out 00:21:55.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111678 s, 93.9 MB/s 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:55.121 256+0 records in 00:21:55.121 256+0 records out 00:21:55.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373265 s, 28.1 MB/s 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.121 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:55.379 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.379 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.379 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.379 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:55.380 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:55.638 13:56:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:55.897 malloc_lvol_verify 00:21:55.897 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:56.154 c71e048c-8511-42b9-89d7-5f88f5b8a494 00:21:56.154 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:56.412 208e90f4-25c0-45d8-bca8-a6966199f187 00:21:56.412 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:56.670 /dev/nbd0 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:56.670 mke2fs 1.47.0 (5-Feb-2023) 00:21:56.670 Discarding device blocks: 0/4096 done 00:21:56.670 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:56.670 00:21:56.670 Allocating group tables: 0/1 done 00:21:56.670 Writing inode tables: 0/1 done 00:21:56.670 Creating journal (1024 blocks): done 00:21:56.670 Writing superblocks and filesystem accounting information: 0/1 done 00:21:56.670 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.670 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:56.928 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.928 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.928 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90272 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90272 ']' 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90272 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90272 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.929 killing process with pid 90272 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90272' 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90272 00:21:56.929 13:56:06 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90272 00:21:58.895 13:56:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:58.895 00:21:58.895 real 0m6.266s 00:21:58.895 user 0m8.241s 00:21:58.895 sys 0m1.551s 00:21:58.895 13:56:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.895 13:56:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:58.895 ************************************ 00:21:58.895 END TEST bdev_nbd 00:21:58.895 ************************************ 00:21:58.895 13:56:08 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:58.895 13:56:08 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:21:58.895 13:56:08 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:21:58.895 13:56:08 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:58.895 13:56:08 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.895 13:56:08 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.895 13:56:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:58.895 ************************************ 00:21:58.895 START TEST bdev_fio 00:21:58.895 ************************************ 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:58.895 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:58.895 ************************************ 00:21:58.895 START TEST bdev_fio_rw_verify 00:21:58.895 ************************************ 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:58.895 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:21:58.896 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.896 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.896 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:21:58.896 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:58.896 13:56:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:59.153 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:59.153 fio-3.35 00:21:59.153 Starting 1 thread 00:22:11.356 00:22:11.356 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90477: Tue Oct 1 13:56:20 2024 00:22:11.356 read: IOPS=9933, BW=38.8MiB/s (40.7MB/s)(388MiB/10001msec) 00:22:11.356 slat (usec): min=18, max=669, avg=23.75, stdev= 6.65 00:22:11.356 clat (usec): min=11, max=886, avg=159.45, stdev=60.08 00:22:11.356 lat (usec): min=33, max=908, avg=183.19, stdev=61.21 00:22:11.356 clat percentiles (usec): 00:22:11.356 | 50.000th=[ 157], 99.000th=[ 285], 99.900th=[ 469], 99.990th=[ 807], 00:22:11.356 | 99.999th=[ 889] 00:22:11.356 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(404MiB/9866msec); 0 zone resets 00:22:11.356 slat (usec): min=8, max=295, avg=20.56, stdev= 6.21 00:22:11.356 clat (usec): min=62, max=1808, avg=366.96, stdev=58.08 00:22:11.356 lat (usec): min=79, max=1830, avg=387.52, stdev=59.45 00:22:11.356 clat percentiles (usec): 00:22:11.356 | 50.000th=[ 367], 99.000th=[ 529], 99.900th=[ 775], 99.990th=[ 1352], 00:22:11.356 | 99.999th=[ 1795] 00:22:11.356 bw ( KiB/s): min=37744, max=44080, per=98.45%, avg=41231.58, stdev=1785.68, samples=19 00:22:11.356 iops : min= 9436, max=11020, avg=10307.89, stdev=446.42, samples=19 00:22:11.356 lat (usec) : 20=0.01%, 50=0.01%, 100=10.29%, 
250=36.70%, 500=52.32% 00:22:11.356 lat (usec) : 750=0.62%, 1000=0.05% 00:22:11.356 lat (msec) : 2=0.02% 00:22:11.356 cpu : usr=98.18%, sys=0.75%, ctx=37, majf=0, minf=8428 00:22:11.356 IO depths : 1=7.7%, 2=20.1%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:11.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.356 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.356 issued rwts: total=99340,103296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.356 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:11.356 00:22:11.356 Run status group 0 (all jobs): 00:22:11.356 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=388MiB (407MB), run=10001-10001msec 00:22:11.356 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=404MiB (423MB), run=9866-9866msec 00:22:11.618 ----------------------------------------------------- 00:22:11.618 Suppressions used: 00:22:11.618 count bytes template 00:22:11.618 1 7 /usr/src/fio/parse.c 00:22:11.618 848 81408 /usr/src/fio/iolog.c 00:22:11.618 1 8 libtcmalloc_minimal.so 00:22:11.618 1 904 libcrypto.so 00:22:11.618 ----------------------------------------------------- 00:22:11.618 00:22:11.877 00:22:11.877 real 0m12.902s 00:22:11.877 user 0m13.276s 00:22:11.877 sys 0m0.868s 00:22:11.877 ************************************ 00:22:11.877 END TEST bdev_fio_rw_verify 00:22:11.877 ************************************ 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:22:11.877 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6104fe6f-12db-4acc-a5d6-e4995ab5618d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"6104fe6f-12db-4acc-a5d6-e4995ab5618d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6104fe6f-12db-4acc-a5d6-e4995ab5618d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "951074a7-d486-48e7-a4cf-3bee01d307df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "488c2ad9-27ec-492a-b8b6-4a3c4fe063f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "12f8d571-d31e-4327-b810-f7200a137f77",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:11.878 /home/vagrant/spdk_repo/spdk 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:22:11.878 00:22:11.878 real 0m13.164s 00:22:11.878 user 0m13.389s 00:22:11.878 sys 0m1.001s 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:11.878 ************************************ 00:22:11.878 END TEST bdev_fio 00:22:11.878 ************************************ 00:22:11.878 13:56:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:11.878 13:56:21 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:11.878 13:56:21 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:11.878 13:56:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:11.878 13:56:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:11.878 13:56:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.878 ************************************ 00:22:11.878 START TEST bdev_verify 00:22:11.878 ************************************ 00:22:11.878 13:56:22 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:12.137 [2024-10-01 13:56:22.095802] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:12.137 [2024-10-01 13:56:22.095930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90639 ] 00:22:12.137 [2024-10-01 13:56:22.269248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:12.396 [2024-10-01 13:56:22.490260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.396 [2024-10-01 13:56:22.490304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.964 Running I/O for 5 seconds... 00:22:18.083 14000.00 IOPS, 54.69 MiB/s 14684.00 IOPS, 57.36 MiB/s 14801.67 IOPS, 57.82 MiB/s 14517.50 IOPS, 56.71 MiB/s 14737.40 IOPS, 57.57 MiB/s 00:22:18.083 Latency(us) 00:22:18.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.083 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:18.083 Verification LBA range: start 0x0 length 0x2000 00:22:18.083 raid5f : 5.02 7344.49 28.69 0.00 0.00 26143.76 98.70 22003.25 00:22:18.083 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:18.083 Verification LBA range: start 0x2000 length 0x2000 00:22:18.083 raid5f : 5.01 7378.15 28.82 0.00 0.00 26047.43 264.84 20950.46 00:22:18.083 =================================================================================================================== 00:22:18.083 Total : 14722.64 57.51 0.00 0.00 26095.52 98.70 22003.25 00:22:19.459 00:22:19.459 real 0m7.604s 00:22:19.459 user 0m13.784s 00:22:19.459 sys 0m0.310s 00:22:19.459 ************************************ 00:22:19.460 END TEST bdev_verify 00:22:19.460 ************************************ 00:22:19.460 13:56:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.460 13:56:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 
00:22:19.719 13:56:29 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:19.719 13:56:29 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:22:19.719 13:56:29 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.719 13:56:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:19.719 ************************************ 00:22:19.719 START TEST bdev_verify_big_io 00:22:19.719 ************************************ 00:22:19.719 13:56:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:19.719 [2024-10-01 13:56:29.780953] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:19.719 [2024-10-01 13:56:29.781077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90737 ] 00:22:19.978 [2024-10-01 13:56:29.955315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.237 [2024-10-01 13:56:30.205952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.237 [2024-10-01 13:56:30.205991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.805 Running I/O for 5 seconds... 
00:22:26.018 756.00 IOPS, 47.25 MiB/s 761.00 IOPS, 47.56 MiB/s 845.33 IOPS, 52.83 MiB/s 887.50 IOPS, 55.47 MiB/s 888.00 IOPS, 55.50 MiB/s 00:22:26.018 Latency(us) 00:22:26.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.018 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:26.018 Verification LBA range: start 0x0 length 0x200 00:22:26.018 raid5f : 5.19 452.84 28.30 0.00 0.00 6887480.46 150.52 309940.54 00:22:26.018 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:26.018 Verification LBA range: start 0x200 length 0x200 00:22:26.018 raid5f : 5.12 446.37 27.90 0.00 0.00 7090394.56 185.06 314993.91 00:22:26.018 =================================================================================================================== 00:22:26.018 Total : 899.22 56.20 0.00 0.00 6987580.08 150.52 314993.91 00:22:27.426 00:22:27.426 real 0m7.855s 00:22:27.426 user 0m14.253s 00:22:27.426 sys 0m0.297s 00:22:27.426 13:56:37 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:27.426 13:56:37 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:27.426 ************************************ 00:22:27.426 END TEST bdev_verify_big_io 00:22:27.426 ************************************ 00:22:27.426 13:56:37 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:27.426 13:56:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:27.426 13:56:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.426 13:56:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:27.426 ************************************ 00:22:27.426 START TEST bdev_write_zeroes 00:22:27.426 ************************************ 
00:22:27.426 13:56:37 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:27.685 [2024-10-01 13:56:37.710989] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:27.685 [2024-10-01 13:56:37.711127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90841 ] 00:22:27.945 [2024-10-01 13:56:37.883472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.945 [2024-10-01 13:56:38.100263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.514 Running I/O for 1 seconds... 00:22:29.464 26487.00 IOPS, 103.46 MiB/s 00:22:29.464 Latency(us) 00:22:29.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.464 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:29.464 raid5f : 1.01 26469.99 103.40 0.00 0.00 4820.17 1368.62 6658.88 00:22:29.465 =================================================================================================================== 00:22:29.465 Total : 26469.99 103.40 0.00 0.00 4820.17 1368.62 6658.88 00:22:31.370 00:22:31.370 real 0m3.634s 00:22:31.370 user 0m3.204s 00:22:31.370 sys 0m0.298s 00:22:31.370 13:56:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.370 13:56:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:31.370 ************************************ 00:22:31.370 END TEST bdev_write_zeroes 00:22:31.370 ************************************ 00:22:31.370 13:56:41 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:31.370 13:56:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:31.370 13:56:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.370 13:56:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:31.370 ************************************ 00:22:31.370 START TEST bdev_json_nonenclosed 00:22:31.370 ************************************ 00:22:31.370 13:56:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:31.370 [2024-10-01 13:56:41.420555] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:31.370 [2024-10-01 13:56:41.420688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90894 ] 00:22:31.629 [2024-10-01 13:56:41.592981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.629 [2024-10-01 13:56:41.811502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.629 [2024-10-01 13:56:41.811598] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:22:31.629 [2024-10-01 13:56:41.811628] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:31.629 [2024-10-01 13:56:41.811640] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:32.196 00:22:32.196 real 0m0.917s 00:22:32.196 user 0m0.657s 00:22:32.196 sys 0m0.153s 00:22:32.196 13:56:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.196 13:56:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:32.196 ************************************ 00:22:32.196 END TEST bdev_json_nonenclosed 00:22:32.196 ************************************ 00:22:32.196 13:56:42 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:32.196 13:56:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:22:32.196 13:56:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.196 13:56:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.196 ************************************ 00:22:32.196 START TEST bdev_json_nonarray 00:22:32.196 ************************************ 00:22:32.196 13:56:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:32.454 [2024-10-01 13:56:42.414636] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:32.454 [2024-10-01 13:56:42.414774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90925 ] 00:22:32.454 [2024-10-01 13:56:42.592639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.711 [2024-10-01 13:56:42.827578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.711 [2024-10-01 13:56:42.827690] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:22:32.711 [2024-10-01 13:56:42.827722] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:32.711 [2024-10-01 13:56:42.827737] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:33.278 00:22:33.278 real 0m0.970s 00:22:33.278 user 0m0.710s 00:22:33.278 sys 0m0.153s 00:22:33.278 13:56:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:33.278 13:56:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:33.278 ************************************ 00:22:33.278 END TEST bdev_json_nonarray 00:22:33.278 ************************************ 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:33.278 13:56:43 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:33.278 00:22:33.278 real 0m51.924s 00:22:33.278 user 1m9.030s 00:22:33.278 sys 0m5.716s 00:22:33.278 13:56:43 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:33.278 13:56:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:33.278 ************************************ 00:22:33.278 END TEST blockdev_raid5f 00:22:33.278 ************************************ 00:22:33.278 13:56:43 -- spdk/autotest.sh@194 -- # uname -s 00:22:33.278 13:56:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:33.278 13:56:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:33.278 13:56:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:33.278 13:56:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:33.278 13:56:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:33.278 13:56:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:33.278 13:56:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.278 13:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:33.537 13:56:43 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:33.537 13:56:43 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:33.537 13:56:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:33.537 13:56:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:33.537 13:56:43 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:33.537 13:56:43 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:33.537 13:56:43 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:33.537 13:56:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.537 13:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:33.537 13:56:43 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:33.537 13:56:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:33.537 13:56:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:33.537 13:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:36.121 INFO: APP EXITING 00:22:36.121 INFO: killing all VMs 00:22:36.121 INFO: killing vhost app 00:22:36.121 INFO: EXIT DONE 00:22:36.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.379 Waiting for block devices as requested 00:22:36.379 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.638 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:37.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:37.574 Cleaning 00:22:37.574 Removing: /var/run/dpdk/spdk0/config 00:22:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:37.574 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:37.574 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:37.574 Removing: /dev/shm/spdk_tgt_trace.pid56661 00:22:37.574 Removing: /var/run/dpdk/spdk0 00:22:37.574 Removing: /var/run/dpdk/spdk_pid56420 00:22:37.574 Removing: /var/run/dpdk/spdk_pid56661 00:22:37.574 Removing: /var/run/dpdk/spdk_pid56901 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57005 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57061 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57200 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57224 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57439 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57555 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57663 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57791 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57904 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57948 00:22:37.574 Removing: /var/run/dpdk/spdk_pid57986 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58062 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58179 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58642 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58718 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58805 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58821 00:22:37.574 Removing: /var/run/dpdk/spdk_pid58985 00:22:37.574 Removing: /var/run/dpdk/spdk_pid59001 00:22:37.574 Removing: /var/run/dpdk/spdk_pid59160 00:22:37.832 Removing: /var/run/dpdk/spdk_pid59182 00:22:37.832 Removing: /var/run/dpdk/spdk_pid59257 00:22:37.832 Removing: /var/run/dpdk/spdk_pid59275 00:22:37.832 Removing: /var/run/dpdk/spdk_pid59344 00:22:37.833 Removing: /var/run/dpdk/spdk_pid59368 00:22:37.833 Removing: /var/run/dpdk/spdk_pid59574 00:22:37.833 Removing: /var/run/dpdk/spdk_pid59615 00:22:37.833 Removing: /var/run/dpdk/spdk_pid59705 00:22:37.833 Removing: /var/run/dpdk/spdk_pid61081 00:22:37.833 Removing: 
/var/run/dpdk/spdk_pid61293 00:22:37.833 Removing: /var/run/dpdk/spdk_pid61444 00:22:37.833 Removing: /var/run/dpdk/spdk_pid62087 00:22:37.833 Removing: /var/run/dpdk/spdk_pid62299 00:22:37.833 Removing: /var/run/dpdk/spdk_pid62450 00:22:37.833 Removing: /var/run/dpdk/spdk_pid63099 00:22:37.833 Removing: /var/run/dpdk/spdk_pid63429 00:22:37.833 Removing: /var/run/dpdk/spdk_pid63569 00:22:37.833 Removing: /var/run/dpdk/spdk_pid64965 00:22:37.833 Removing: /var/run/dpdk/spdk_pid65218 00:22:37.833 Removing: /var/run/dpdk/spdk_pid65364 00:22:37.833 Removing: /var/run/dpdk/spdk_pid66756 00:22:37.833 Removing: /var/run/dpdk/spdk_pid67009 00:22:37.833 Removing: /var/run/dpdk/spdk_pid67149 00:22:37.833 Removing: /var/run/dpdk/spdk_pid68534 00:22:37.833 Removing: /var/run/dpdk/spdk_pid68980 00:22:37.833 Removing: /var/run/dpdk/spdk_pid69124 00:22:37.833 Removing: /var/run/dpdk/spdk_pid70613 00:22:37.833 Removing: /var/run/dpdk/spdk_pid70874 00:22:37.833 Removing: /var/run/dpdk/spdk_pid71027 00:22:37.833 Removing: /var/run/dpdk/spdk_pid72523 00:22:37.833 Removing: /var/run/dpdk/spdk_pid72783 00:22:37.833 Removing: /var/run/dpdk/spdk_pid72939 00:22:37.833 Removing: /var/run/dpdk/spdk_pid74438 00:22:37.833 Removing: /var/run/dpdk/spdk_pid74931 00:22:37.833 Removing: /var/run/dpdk/spdk_pid75082 00:22:37.833 Removing: /var/run/dpdk/spdk_pid75228 00:22:37.833 Removing: /var/run/dpdk/spdk_pid75703 00:22:37.833 Removing: /var/run/dpdk/spdk_pid76452 00:22:37.833 Removing: /var/run/dpdk/spdk_pid76854 00:22:37.833 Removing: /var/run/dpdk/spdk_pid77537 00:22:37.833 Removing: /var/run/dpdk/spdk_pid78000 00:22:37.833 Removing: /var/run/dpdk/spdk_pid78759 00:22:37.833 Removing: /var/run/dpdk/spdk_pid79168 00:22:37.833 Removing: /var/run/dpdk/spdk_pid81162 00:22:37.833 Removing: /var/run/dpdk/spdk_pid81606 00:22:37.833 Removing: /var/run/dpdk/spdk_pid82056 00:22:37.833 Removing: /var/run/dpdk/spdk_pid84161 00:22:37.833 Removing: /var/run/dpdk/spdk_pid84652 00:22:37.833 Removing: 
/var/run/dpdk/spdk_pid85175 00:22:37.833 Removing: /var/run/dpdk/spdk_pid86249 00:22:37.833 Removing: /var/run/dpdk/spdk_pid86572 00:22:37.833 Removing: /var/run/dpdk/spdk_pid87522 00:22:37.833 Removing: /var/run/dpdk/spdk_pid87845 00:22:38.091 Removing: /var/run/dpdk/spdk_pid88799 00:22:38.091 Removing: /var/run/dpdk/spdk_pid89122 00:22:38.091 Removing: /var/run/dpdk/spdk_pid89806 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90086 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90159 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90201 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90462 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90639 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90737 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90841 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90894 00:22:38.091 Removing: /var/run/dpdk/spdk_pid90925 00:22:38.091 Clean 00:22:38.091 13:56:48 -- common/autotest_common.sh@1451 -- # return 0 00:22:38.091 13:56:48 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:38.091 13:56:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.091 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:22:38.091 13:56:48 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:38.091 13:56:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.091 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:22:38.091 13:56:48 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:38.351 13:56:48 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:38.351 13:56:48 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:38.351 13:56:48 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:38.351 13:56:48 -- spdk/autotest.sh@394 -- # hostname 00:22:38.351 13:56:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:38.351 geninfo: WARNING: invalid characters removed from testname! 00:23:04.921 13:57:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:09.101 13:57:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:11.004 13:57:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:13.558 13:57:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:16.100 13:57:25 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:18.680 13:57:28 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:21.215 13:57:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:21.215 13:57:30 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:23:21.215 13:57:30 -- common/autotest_common.sh@1681 -- $ lcov --version 00:23:21.215 13:57:30 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:23:21.215 13:57:31 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:23:21.215 13:57:31 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:23:21.215 13:57:31 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:23:21.215 13:57:31 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:23:21.215 13:57:31 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:21.215 13:57:31 -- scripts/common.sh@336 -- $ read -ra ver1 00:23:21.215 13:57:31 -- scripts/common.sh@337 -- $ IFS=.-: 00:23:21.215 13:57:31 -- scripts/common.sh@337 -- $ read -ra ver2 00:23:21.215 13:57:31 -- scripts/common.sh@338 -- $ local 'op=<' 00:23:21.215 13:57:31 -- scripts/common.sh@340 -- $ ver1_l=2 00:23:21.215 13:57:31 -- scripts/common.sh@341 -- $ ver2_l=1 00:23:21.215 13:57:31 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:23:21.215 13:57:31 -- scripts/common.sh@344 -- $ case "$op" in 00:23:21.215 13:57:31 -- scripts/common.sh@345 -- $ : 1 
00:23:21.215 13:57:31 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:23:21.215 13:57:31 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.215 13:57:31 -- scripts/common.sh@365 -- $ decimal 1 00:23:21.215 13:57:31 -- scripts/common.sh@353 -- $ local d=1 00:23:21.215 13:57:31 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:21.215 13:57:31 -- scripts/common.sh@355 -- $ echo 1 00:23:21.215 13:57:31 -- scripts/common.sh@365 -- $ ver1[v]=1 00:23:21.215 13:57:31 -- scripts/common.sh@366 -- $ decimal 2 00:23:21.215 13:57:31 -- scripts/common.sh@353 -- $ local d=2 00:23:21.215 13:57:31 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:21.215 13:57:31 -- scripts/common.sh@355 -- $ echo 2 00:23:21.215 13:57:31 -- scripts/common.sh@366 -- $ ver2[v]=2 00:23:21.215 13:57:31 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:23:21.215 13:57:31 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:23:21.215 13:57:31 -- scripts/common.sh@368 -- $ return 0 00:23:21.215 13:57:31 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.215 13:57:31 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:23:21.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.215 --rc genhtml_branch_coverage=1 00:23:21.215 --rc genhtml_function_coverage=1 00:23:21.215 --rc genhtml_legend=1 00:23:21.215 --rc geninfo_all_blocks=1 00:23:21.215 --rc geninfo_unexecuted_blocks=1 00:23:21.215 00:23:21.215 ' 00:23:21.215 13:57:31 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:23:21.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.215 --rc genhtml_branch_coverage=1 00:23:21.215 --rc genhtml_function_coverage=1 00:23:21.215 --rc genhtml_legend=1 00:23:21.215 --rc geninfo_all_blocks=1 00:23:21.215 --rc geninfo_unexecuted_blocks=1 00:23:21.215 00:23:21.215 ' 00:23:21.215 13:57:31 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:23:21.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.215 --rc genhtml_branch_coverage=1 00:23:21.215 --rc genhtml_function_coverage=1 00:23:21.215 --rc genhtml_legend=1 00:23:21.215 --rc geninfo_all_blocks=1 00:23:21.215 --rc geninfo_unexecuted_blocks=1 00:23:21.215 00:23:21.215 ' 00:23:21.216 13:57:31 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:23:21.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.216 --rc genhtml_branch_coverage=1 00:23:21.216 --rc genhtml_function_coverage=1 00:23:21.216 --rc genhtml_legend=1 00:23:21.216 --rc geninfo_all_blocks=1 00:23:21.216 --rc geninfo_unexecuted_blocks=1 00:23:21.216 00:23:21.216 ' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.216 13:57:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:21.216 13:57:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:21.216 13:57:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.216 13:57:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.216 13:57:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.216 13:57:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.216 13:57:31 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.216 13:57:31 -- paths/export.sh@5 -- $ export PATH 00:23:21.216 13:57:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.216 13:57:31 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:21.216 13:57:31 -- common/autobuild_common.sh@479 -- $ date +%s 00:23:21.216 13:57:31 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727791051.XXXXXX 00:23:21.216 13:57:31 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727791051.br8nNm 00:23:21.216 13:57:31 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:23:21.216 13:57:31 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:23:21.216 13:57:31 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:21.216 13:57:31 -- common/autotest_common.sh@10 -- $ set +x 00:23:21.216 13:57:31 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:23:21.216 13:57:31 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:23:21.216 13:57:31 -- pm/common@17 -- $ local monitor 00:23:21.216 13:57:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:21.216 13:57:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:21.216 13:57:31 -- pm/common@25 -- $ sleep 1 00:23:21.216 13:57:31 -- pm/common@21 -- $ date +%s 00:23:21.216 13:57:31 -- pm/common@21 -- $ date +%s 00:23:21.216 13:57:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727791051 00:23:21.216 13:57:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727791051 00:23:21.216 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727791051_collect-vmstat.pm.log 00:23:21.216 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727791051_collect-cpu-load.pm.log 00:23:22.153 13:57:32 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:23:22.153 13:57:32 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:23:22.153 13:57:32 -- spdk/autopackage.sh@14 -- $ timing_finish 00:23:22.153 13:57:32 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:22.153 13:57:32 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:22.153 
13:57:32 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:22.153 13:57:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:22.153 13:57:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:22.153 13:57:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:22.153 13:57:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:22.153 13:57:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:22.153 13:57:32 -- pm/common@44 -- $ pid=92449 00:23:22.153 13:57:32 -- pm/common@50 -- $ kill -TERM 92449 00:23:22.153 13:57:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:22.153 13:57:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:22.153 13:57:32 -- pm/common@44 -- $ pid=92450 00:23:22.153 13:57:32 -- pm/common@50 -- $ kill -TERM 92450 00:23:22.153 + [[ -n 5201 ]] 00:23:22.153 + sudo kill 5201 00:23:22.161 [Pipeline] } 00:23:22.178 [Pipeline] // timeout 00:23:22.184 [Pipeline] } 00:23:22.198 [Pipeline] // stage 00:23:22.203 [Pipeline] } 00:23:22.217 [Pipeline] // catchError 00:23:22.227 [Pipeline] stage 00:23:22.229 [Pipeline] { (Stop VM) 00:23:22.242 [Pipeline] sh 00:23:22.522 + vagrant halt 00:23:25.817 ==> default: Halting domain... 00:23:32.433 [Pipeline] sh 00:23:32.713 + vagrant destroy -f 00:23:36.034 ==> default: Removing domain... 
00:23:36.045 [Pipeline] sh 00:23:36.325 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:36.333 [Pipeline] } 00:23:36.350 [Pipeline] // stage 00:23:36.356 [Pipeline] } 00:23:36.372 [Pipeline] // dir 00:23:36.377 [Pipeline] } 00:23:36.392 [Pipeline] // wrap 00:23:36.399 [Pipeline] } 00:23:36.413 [Pipeline] // catchError 00:23:36.422 [Pipeline] stage 00:23:36.424 [Pipeline] { (Epilogue) 00:23:36.436 [Pipeline] sh 00:23:36.716 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:42.015 [Pipeline] catchError 00:23:42.017 [Pipeline] { 00:23:42.032 [Pipeline] sh 00:23:42.319 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:42.319 Artifacts sizes are good 00:23:42.328 [Pipeline] } 00:23:42.342 [Pipeline] // catchError 00:23:42.354 [Pipeline] archiveArtifacts 00:23:42.362 Archiving artifacts 00:23:42.544 [Pipeline] cleanWs 00:23:42.555 [WS-CLEANUP] Deleting project workspace... 00:23:42.555 [WS-CLEANUP] Deferred wipeout is used... 00:23:42.562 [WS-CLEANUP] done 00:23:42.564 [Pipeline] } 00:23:42.580 [Pipeline] // stage 00:23:42.585 [Pipeline] } 00:23:42.599 [Pipeline] // node 00:23:42.604 [Pipeline] End of Pipeline 00:23:42.643 Finished: SUCCESS